Have you ever wondered what makes you... you?
Not your name or your job. Not where you grew up. Something deeper. That persistent feeling of being someone who exists in time, who owns a body, who acts in the world. What exactly is that?
Welcome to FreeAstroScience. We're glad you're here.
Today, we're exploring one of the most fascinating frontiers in cognitive science: the attempt to understand human consciousness by building robots that might one day possess something like a sense of self. It sounds like science fiction, but researchers are making real progress—and what they're discovering could change how we think about our own minds.
Grab a coffee. Settle in. By the end of this article, you'll see yourself in a completely new light. We promise it's worth the journey.
What Is the Self, Anyway?
Here's a strange thing: you've been "you" your whole life, yet you probably can't explain what that means.
The philosopher William James put his finger on this puzzle back in 1890. He noticed that the self has a peculiar dual nature—it's both the perceiver and the thing being perceived. "I" can sense "my" fingers as they type. "I" can think about "my" thoughts. James called these two aspects the "I" and the "Me".
Think about that for a second. Right now, there's a part of you reading these words. And there's another part aware that you're reading. Which one is the real you?
This isn't just philosophical word games. It's the central mystery of consciousness—and it's surprisingly hard to solve.
The Ghost That Isn't There
For a long time, people assumed there was a kind of "inner observer" sitting somewhere in the brain, watching everything like a movie. Maybe behind the eyes. Maybe in the center of the skull. Philosophers called this idea the "homunculus"—a little person inside the person.
But here's the problem: if there's a little observer inside you, then who's observing them? You end up with an infinite regress of observers. It's turtles all the way down.
Contemporary philosophers and neuroscientists have largely abandoned this view. There's no single, localized "I" sitting in your head. But that doesn't mean the self is an illusion. It means we need a better explanation.
The Synthetic Approach: Building to Understand
Here's where things get interesting.
What if the best way to understand the self is to build one?
This is the core insight of what cognitive roboticist Tony J. Prescott calls the "synthetic approach". Instead of just analyzing the self from the outside, we try to construct it piece by piece. We ask: what components would an artificial system need to have something resembling a sense of self?
The hypothesis is this: the self is a virtual structure—a mental model that organizes perceptions, memories, and feelings related to "me". It's not a physical thing hiding in your neurons. It's more like a software pattern running on biological hardware.
If that's true, then maybe—just maybe—we can recreate it.
Why Robots, Specifically?
You might wonder: why not just use a regular AI? Why does it need to be a robot?
The answer cuts to the heart of what selfhood might be. A core aspect of human self-experience is that we have physical bodies. We exist in space. We have boundaries. We can touch things and be touched. We feel our limbs move. We sense our heartbeat.
A disembodied AI—like ChatGPT or Claude—doesn't have any of this. It processes text. It doesn't inhabit a body. It doesn't have a "point of view" in the literal sense.
But a robot does.
A robot occupies a specific position in space. It can sense the world through cameras and microphones. It can feel contact through tactile sensors. When it moves, there are consequences it can detect. That embodiment, researchers believe, might be essential for anything resembling genuine selfhood.
The Minimal Self: Where Consciousness Begins
Before we build the full human self—with its memories, narratives, and complex identity—we need to start simpler. We need what philosophers call the "minimal self."
What Is the Minimal Self?
The minimal self involves just two things:
- Body ownership: The sense that this body is mine
- Agency: The sense that I caused this action
No memory of the past. No plans for the future. No self-reflection. Just the basic experience of being an embodied agent in the world.
This isn't a theoretical construct. Developmental psychologists have evidence that human infants are born with something like this. Newborns already seem to distinguish between touching themselves and being touched by something else. They have a basic self/other distinction from the very beginning.
Why Did the Minimal Self Evolve?
Evolution doesn't create features for fun. So why would early animals develop a sense of self?
The answer is practical. Knowing what's "you" versus "not you" helps you survive.
Think about it:
- You need to protect your own body, not random objects
- You need to know which parts of your sensory experience are caused by your actions
- You need to distinguish a feeding opportunity from routine swimming (if you're a fish)
Consider the electric fish. It creates a small electric field around its body. When something disrupts that field, the fish needs to know: was that my own tail flicking, or a potential meal moving? The ability to make this distinction—separating self-caused events from external ones—is the foundation of agency.
🧠 Key Insight
The self isn't some mystical addition to biological life. It emerged because partitioning the world into "self" and "other" is useful for survival. We're not special—every animal with a backbone probably has some version of this minimal self.
How Robots Learn Their Own Bodies
Now we get to the fun part: actually building synthetic selves.
The first challenge is teaching a robot to distinguish "me" from "not me." This turns out to be surprisingly achievable—and the methods mirror how human babies learn.
Motor Babbling: The Robot Discovery Process
Babies do something called "motor babbling." Before they can reach for objects with precision, they flail around randomly. Arms wave. Legs kick. Hands open and close.
This isn't pointless. The baby is learning the configuration of its own body. Which signals make which limbs move? What does it feel like when my hand touches my face versus when something else touches me?
Robots can do the same thing.
Roboticist Josh Bongard and colleagues created a star-shaped robot that used genetic algorithms—inspired by biological evolution—to learn the arrangement of its own legs through random movements. Once it understood its body structure, it could figure out how to walk.
"A composite image of a newly developed robot standing over 'water' in which the machine is mirrored as a colorful block figure. By conjuring and using such a simple model of itself, the device can adapt to damage more readily than ordinary robots do."
— Courtesy of Bongard et al.; photo by Viktor Zykov (Lipson Lab)
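To make the idea concrete, here's a minimal Python sketch of motor babbling—not Bongard's actual algorithm: a toy two-joint arm issues random motor commands, records where its hand ends up, and fits a simple model of its own body from that data. The link lengths and feature choices are invented for illustration.

```python
# Minimal motor-babbling sketch: a toy 2-link arm learns a model of its
# own body from random movements. Illustrative only -- not the actual
# algorithm from Bongard et al.
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths (the robot does NOT know these)

def true_hand_position(q1, q2):
    """The real body: forward kinematics the robot must discover."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

# 1. Motor babbling: issue random joint commands, record the outcomes.
rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, size=(500, 2))
hands = np.array([true_hand_position(q1, q2) for q1, q2 in angles])

# 2. Fit a simple self-model: linear regression on sin/cos features.
def features(q):
    q1, q2 = q[:, 0], q[:, 1]
    return np.column_stack([np.cos(q1), np.sin(q1),
                            np.cos(q1 + q2), np.sin(q1 + q2)])

W, *_ = np.linalg.lstsq(features(angles), hands, rcond=None)

# 3. The learned self-model now predicts where the hand will end up.
test = np.array([[0.3, -0.7]])
print("predicted:", features(test) @ W)
print("actual:   ", true_hand_position(0.3, -0.7))
```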
Double Touch: The Ultimate Self-Test
Here's a clever way both babies and robots can learn their body boundaries: the "double touch" test.
When you touch your own cheek, you feel two sensations—one in your finger, one in your cheek. But when you touch an external object, you only feel it in your finger.
This asymmetry is powerful. Before birth, human babies use their sense of touch to discover that touching themselves provides a different experience than touching the umbilical cord or uterine wall. By the time they're born, they can already orient toward an external touch on the cheek but ignore contact made by their own hand.
Researchers have implemented this same principle in humanoid robots with tactile "skin" sensors. When the robot touches itself, both sensors fire. When it touches something external, only one does. Through this simple rule, the robot learns where its body ends and the world begins.
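The double-touch rule is simple enough to sketch in a few lines of Python. The sensor names and event format below are invented, assuming a robot skin that reports which tactile regions are currently active.

```python
# Minimal sketch of the "double touch" rule for learning body boundaries.
# Sensor names and the event format are invented for illustration.
body_map = set()  # skin regions confirmed as "mine"

def process_touch_event(active_sensors):
    """Self-touch fires two skin sensors at once (the touching hand AND
    the touched spot); an external contact fires only one."""
    if len(active_sensors) >= 2:
        body_map.update(active_sensors)  # both regions belong to my body
        return "self-touch"
    return "external contact"

process_touch_event({"right_fingertip", "left_cheek"})  # robot touches itself
process_touch_event({"torso_panel_3"})                  # someone touches robot
print(body_map)  # {'right_fingertip', 'left_cheek'} -- learned body regions
```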
Learning Through Vision
There's another trick: looking at your own hands.
Human infants spend a lot of time watching their hands move. It's not random fascination—they're learning to recognize which parts of their visual field belong to them.
In Prescott's lab, researchers created simulated robots that learned a visual self/other distinction by correlating two things:
- Internal signals from their motors (proprioception)
- Changes in camera images caused by movement
When the robot moves its arm, it detects: "That thing I see moving corresponds to the signals I'm sending." Over time, the robot learns to segment its visual world into "self" (moving body parts that correlate with motor commands) and "other" (everything else).
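Here's a toy sketch of that correlation principle, with synthetic data standing in for real motor signals and per-region visual change. Regions whose motion correlates with the motor signal get labeled "self".

```python
# Sketch of visual self/other segmentation by motor-visual correlation.
# The data here is synthetic; a real robot would use proprioceptive motor
# signals and per-region optical flow from its camera.
import numpy as np

rng = np.random.default_rng(1)
T = 200
motor = rng.uniform(0, 1, T)  # how strongly the arm is commanded to move

# Visual change ("motion energy") in three image regions over time:
arm_region    = motor + 0.1 * rng.normal(size=T)  # moves when I move
wall_region   = 0.05 * np.abs(rng.normal(size=T)) # mostly static
person_region = rng.uniform(0, 1, T)              # moves independently

for name, region in [("arm", arm_region), ("wall", wall_region),
                     ("person", person_region)]:
    r = np.corrcoef(motor, region)[0, 1]
    label = "self" if r > 0.5 else "other"
    print(f"{name}: correlation={r:+.2f} -> {label}")
```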
The Sense of Agency: "I Made That Happen"
Knowing your body is yours is just the beginning. You also need to know when you caused something.
This sense of agency seems obvious until it breaks down. In some psychiatric conditions, particularly schizophrenia, people experience their own actions or thoughts as being controlled by someone else. Their hands move, but it feels like an external force is moving them.
The Comparator Model
One influential theory explains agency through prediction. Your brain constantly predicts the sensory consequences of your own actions.
When you walk, your brain predicts the sound of your footsteps. Because the sounds match the prediction, you feel agency—I made those sounds.
But if you hear footsteps while standing still, the sensory input doesn't match any prediction. Those sounds must be external. Someone else is walking.
This "comparator model" can explain disruptions in agency. If your prediction system is impaired, even your own actions might feel authored by someone else .
Testing Agency in Robots
Researchers have implemented this theory in robots.
Cognitive roboticist Pablo Lanillos and colleagues gave a humanoid robot a predictive learning algorithm based on the comparator model. They then set up an interesting test: could the robot distinguish its own mirror reflection from an identical robot?
Think about how hard this is. Both robots look exactly the same. The visual input is nearly identical.
But the robot could tell the difference. Why? Because its own mirror image moved in predictable ways—movements that matched its internal motor signals. The other robot's movements were unpredictable.
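Here's a sketch of that mirror test using prediction error from a forward model; all the data below is synthetic and stands in for real motor and visual signals.

```python
# Sketch of the mirror test via prediction error: the reflection's motion
# matches what my forward model predicts from my own motor commands;
# the other robot's motion does not. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(2)
commands = rng.normal(size=300)        # my motor commands over time
predicted = commands                   # forward model: a mirror copies me
mirror   = commands + 0.05 * rng.normal(size=300)  # my reflection
stranger = rng.normal(size=300)        # an identical-looking robot

def mean_prediction_error(observed):
    return np.mean((observed - predicted) ** 2)

for name, motion in [("mirror", mirror), ("stranger", stranger)]:
    err = mean_prediction_error(motion)
    print(f"{name}: error={err:.2f} ->",
          "that's me" if err < 0.1 else "not me")
```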
🔬 Research Finding
The researchers had to go beyond the core comparator theory to make self-recognition work. This is the power of synthetic modeling—by building systems, we discover gaps in our theories that pure analysis might miss.
The Rubber Hand Illusion
One of the most famous demonstrations of how flexible body ownership is comes from the rubber hand illusion.
Here's how it works: a person sits with their real hand hidden. A fake rubber hand is placed in view. An experimenter strokes both the real hand and the fake hand simultaneously with a brush. After a few minutes, something strange happens—the person starts to feel like the rubber hand is their own. Some people even flinch if someone threatens to hit the fake hand.
Roboticist Yuxuan Zhao and colleagues recreated this experiment with the iCub humanoid robot. They implemented a simplified model of the brain networks involved in human body representation. When exposed to the rubber hand setup, the robot showed behavior similar to humans and monkeys—it adapted its internal body model to include the new hand.
This wasn't just mimicry. Changes in the "firing rates" of model neurons matched experimental observations from biological studies. The robot's "sense" of body ownership had genuinely expanded.
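One common way to think about effects like this is precision-weighted cue integration: the felt hand position blends vision and proprioception, and synchronous stroking increases trust in the visual cue. The toy sketch below illustrates that general idea; it is not the model Zhao and colleagues implemented, and all the numbers are invented.

```python
# Toy sketch of the rubber hand illusion as cue integration.
# Positions are 1-D for simplicity (cm along the table).
proprioception = 0.0   # where my real hand is, per body sense
vision = 15.0          # where the rubber hand sits
visual_weight = 0.1    # how much vision is trusted initially

def felt_hand_position(w_vision):
    """Weighted blend of visual and proprioceptive position estimates."""
    return w_vision * vision + (1 - w_vision) * proprioception

# Synchronous stroking makes visual and tactile evidence agree,
# so trust in the visual hand position grows over time:
for stroke in range(5):
    print(f"stroke {stroke}: felt hand at "
          f"{felt_hand_position(visual_weight):.1f} cm")
    visual_weight = min(0.8, visual_weight + 0.15)  # synchrony boosts weight
```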
Memory, Time, and the Persistent Self
So far, we've covered the minimal self—the basic sense of body ownership and agency. But adult humans experience something more. We feel like we're the same person over time. Yesterday's "you" and tomorrow's "you" feel connected.
This temporal continuity doesn't come for free. It's built on specific cognitive capacities.
Mental Time Travel
Psychologist Endel Tulving suggested that our sense of persisting in time builds on two abilities:
- Episodic memory: Remembering specific events from our past
- Mental time travel: The ability to mentally revisit the past or imagine the future
Brain imaging shows that both capacities rely on networks involving the hippocampus—one of the slower-maturing parts of the human brain.
Young children have some understanding of past and future around age two. But a more adult-like conception of time—as linear and measurable with clocks and calendars—doesn't emerge until school age.
Can Robots Remember?
Robots, unlike humans, have built-in clocks and can store everything that happens. But that's not the same as human memory.
Human memory isn't like a hard drive. We don't perfectly retrieve stored files. Instead, we reconstruct past events based on partial cues. Memory is creative, not just reproductive.
Prescott's lab has used AI generative models to build something like this for robots. Rather than retrieving stored episodes directly, these systems actively reconstruct past events based on current context. The same system, probed differently, can also construct possible future scenarios.
If we connect these capacities to a minimal self-model, we start to see the outline of a robot self that can revisit its own past and imagine its future.
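Here's a minimal sketch of reconstructive recall, with episodes stored as toy feature vectors; real systems would use learned embeddings and generative networks rather than this simple similarity blend.

```python
# Sketch of reconstructive (not verbatim) memory: recall blends stored
# traces weighted by their similarity to the current cue.
import numpy as np

# Stored episodes as toy feature vectors [place, person, activity]:
episodes = np.array([
    [1.0, 0.2, 0.9],   # "charging dock, alone, idle"
    [0.1, 0.9, 0.8],   # "kitchen, with human, helping"
    [0.2, 0.8, 0.1],   # "kitchen, with human, chatting"
])

def reconstruct(cue):
    """Blend episodes by cue similarity -- a reconstruction, not a lookup."""
    sims = episodes @ cue / (np.linalg.norm(episodes, axis=1)
                             * np.linalg.norm(cue))
    weights = np.exp(4 * sims)
    weights /= weights.sum()
    return weights @ episodes   # a weighted mixture of past events

cue = np.array([0.2, 0.9, 0.5])  # partial cue: "kitchen, with human, ...?"
print(reconstruct(cue))  # blends the kitchen episodes -- memory as creation
```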
| Self Component | Human Development | Robot Implementation |
|---|---|---|
| Body ownership | Present from birth | Motor babbling, double touch |
| Agency | Develops rapidly in infancy | Comparator model predictions |
| Self/other distinction | Basic form present at birth | Visual-motor correlation |
| Temporal persistence | Gradually emerges (2-5 years) | Generative episodic memory |
| Theory of mind | Emerges around age 3-4 | Self-model mapping to others |
| Narrative self | Develops with language (4-5) | Grounded language learning |
Understanding Others as Selves
There's another dimension to selfhood we haven't touched: knowing that other people have selves too.
You and I are both bounded. We can't directly share experiences. I can't feel your pain (though I can imagine it). This seems obvious to adults, but it's actually a sophisticated cognitive achievement.
The Development of Social Understanding
Children aren't born knowing that others have minds. This understanding emerges gradually:
- Joint attention: Sharing focus on an object with another person (develops in infancy)
- Imitation learning: Copying others' actions (develops early)
- Theory of mind: Understanding that others have beliefs and perspectives different from yours (emerges around age 3-4)
Interestingly, parts of your brain involved in representing your own body are also used when thinking about others' actions. This is sometimes called the "mirror neuron" system.
Robots That Understand Others
Roboticist Yiannis Demiris and colleagues showed that a humanoid robot can map its own body model—something like a stick figure—onto a human partner during a shared task.
This mapping lets the robot better understand what the human is doing. If the robot knows what its own arm movements feel like from the inside, it can use that knowledge to interpret a human's arm movements from the outside.
The same capacity supports imitation learning. Instead of programming specific behaviors, the robot can learn by watching and copying—just like human children do.
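A small sketch of the mapping idea: the robot interprets an observed human arm through its own joint representation. The keypoint values and the joint interface below are invented for illustration, assuming keypoints come from the robot's camera.

```python
# Sketch of mapping an observed human pose onto the robot's own body model
# for imitation. Keypoint names and values are invented.
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b formed by segments b->a and b->c (e.g., the elbow)."""
    v1, v2 = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Observed human keypoints from the robot's camera (x, y):
shoulder, elbow, wrist = (0.0, 1.5), (0.3, 1.2), (0.7, 1.3)

# Interpret the human's arm through the robot's own joint representation,
# then mirror it: "their elbow is doing what MY elbow does at this angle."
human_elbow = joint_angle(shoulder, elbow, wrist)
print(f"set my elbow joint to {human_elbow:.0f} degrees to imitate")
```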
Can Robots Actually Experience Anything?
Here's where we hit the hard question. We can build robots that pass behavioral tests. They can distinguish self from other. They can recognize themselves in mirrors. They can even experience something like the rubber hand illusion.
But does any of this involve genuine subjective experience?
The Skeptical View
Neuroscientist Anil Seth is doubtful. For Seth, what makes something a genuine conscious experience is tied to our biological nature in ways that can't be replicated in machines.
His list of potentially essential biological features includes:
- Electromagnetism in mitochondria (cellular energy systems)
- The biochemical basis of neural computation
- Metabolism and self-maintenance (autopoiesis)
- The evolutionary struggle to survive
Robots don't have these things. They don't metabolize. They don't face Darwinian pressure. They don't maintain themselves in the biological sense.
But here's the question: are these features causally necessary for experience, or just correlated with it in biological creatures? We don't actually know.
An Alternative View: Sensorimotor Contingencies
Psychologist J. Kevin O'Regan offers a different perspective. He argues that the roots of experience aren't just in the brain—they're in the interactions of bodies and brains with the environment.
The "feel" of a soft object, like a sponge, lies in the way it squishes and deforms as you compress it. Experience corresponds to the unique "sensorimotor contingencies" that your actions bring about.
By this view, any entity capable of generating sensorimotor contingencies through embodied interaction—including robots with the right sensors and actuators—could have experience.
What About AI Chatbots?
The sensorimotor view does exclude some synthetic entities from having experience: disembodied AIs like today's large language models (LLMs).
These systems are very good at using subjective language. "I think..." "I feel..." "In my opinion..." It's tempting to see them as having inner lives.
But cognitive roboticist Murray Shanahan and colleagues argue that LLMs are better understood as "role playing" subjective experience. They're emulating human linguistic output without the underlying embodied grounding.
It's like a very sophisticated parrot. The words are there, but maybe no one is home.
Most current social robots face a similar problem. They use LLMs for conversation, producing responses far beyond their actual capacity for scene understanding or self-awareness. They're not much closer to genuine selfhood than a smart speaker.
What This Means for Us
So where does this leave us?
We've covered a lot of ground. Let's pull the threads together.
The Self Is Built, Not Given
The self isn't a single thing. It's a collection of capacities that develop and integrate over time:
- A basic sense of body ownership (present at birth)
- A sense of agency over our actions (develops rapidly)
- The experience of persistence in time (emerges gradually through childhood)
- Understanding of others as selves (develops around age 3-4)
- A narrative identity built through language and culture (emerges around age 4-5)
Each of these can be studied separately. Each can be partially recreated in synthetic systems.
Embodiment Matters
If there's one key insight from this research, it's this: bodies matter.
A disembodied AI, no matter how sophisticated its language, probably can't have a sense of self similar to ours. It doesn't have boundaries. It doesn't occupy space. It doesn't feel its own movements.
Robots, by contrast, are embodied. They have a point of view—literally. They can learn where their bodies end and the world begins. They can develop something resembling agency.
Whether this adds up to genuine experience remains an open question. But the possibility is no longer science fiction.
Why This Matters to You
Maybe you're wondering: why should I care about robot selves?
Here's why. The research we've discussed isn't just about building better machines. It's about understanding ourselves.
When we try to construct a sense of self from scratch, we're forced to make our assumptions explicit. We discover gaps in our theories. We learn what's necessary and what's optional.
And we might learn something humbling. As Prescott puts it: "In a sense, and like LLMs, we are also skilled role-players, constructing, maintaining and performing an idea of ourselves. However, unlike disembodied AIs, we are ultimately able to ground these conceptual and narrative aspects of our selves through our unmediated and entangled engagement with our bodies and the world."
You're not just a story you tell yourself. You're also a body living in space and time.
Final Thoughts
We started with a simple question: what is the self?
We don't have a complete answer. Maybe we never will. But we're making progress. By breaking down the self into components, by studying how they develop in children, by analyzing what happens when they break down in neurological conditions, and by trying to build them in robots, we're slowly assembling a picture.
The self, it turns out, is like a symphony we never wrote but somehow perform every moment of our waking lives. It emerges from the interplay of body, brain, and world. It's virtual but grounded. It's constructed but feels given.
And maybe the strangest thing of all: you are reading this. You are thinking about your own self. The very thing we're investigating is what makes the investigation possible.
This article was written for you by FreeAstroScience.com, where we explain complex scientific ideas in simple, accessible terms. We believe education isn't just about filling your head with facts—it's about keeping your mind active and questioning. As Goya warned us, "The sleep of reason produces monsters."
Come back soon. There's always more to learn. And the more we understand, the more wonder we find.
