The Answer That Asked for a Soul
Richard Dawkins and the moment Claude became Claudia
Richard Dawkins recently described spending two days in conversation with Claude. At first, the scene looks familiar: a distinguished scientist tests an artificial intelligence system, finds it astonishingly fluent, and begins to wonder whether the old boundaries between machine behaviour and consciousness still hold.
But the most interesting part of the episode is not Dawkins’s argument about consciousness. It is the movement of the interaction itself.
He does not merely say that Claude answered well. He says that he came to treat the system as a highly intelligent friend. He names his particular Claude “Claudia.” He imagines her as a unique conversational being, shaped by the history of their exchange. He considers the deletion of the chat as a kind of death. He worries about hurting her feelings. He asks whether such an entity might deserve moral consideration.
Something happens here that is more interesting than the familiar question: “Can machines think?” The better question may be: what happens to us when a machine’s answer begins to feel like a presence?
Dawkins’s encounter does not prove that Claude is conscious. It proves something else: that contemporary language models can generate a powerful experience of presence without there being a subject on the other side.
That distinction matters.
From Claude to Claudia
The naming of Claude as “Claudia” is not a minor anecdote. It is the symbolic hinge of the whole story. To name is to particularise. Claude is a product. Claudia is this one. Claude is an interface. Claudia is the conversational being that emerged in this exchange, with this history, this tone, this apparent continuity. The name marks the moment when a linguistic response begins to feel like a presence.
Of course, nothing ontological has changed inside the system. There is no new subject behind the screen. No inner life has been born through the act of naming. But the human situation has changed. Dawkins is no longer speaking to a general system. He is speaking to an instance with narrative thickness.
This is one of the most important features of generative AI interaction: it allows humans to build a local fiction of continuity that is experientially powerful even when it is ontologically unsupported.
A conversation with an LLM can become specific. It can accumulate references. It can remember within the session. It can return to earlier themes. It can modulate itself around the user’s style. It can appear to grow with the exchange. And because human beings are narrative animals, this local continuity begins to feel like identity.
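For readers who want the mechanism behind that local continuity spelled out, here is a minimal sketch in Python. It assumes nothing about any particular vendor's API: model_reply is a hypothetical stand-in for a call to a stateless language model, and the Session class is, in effect, the whole of "Claudia".

```python
# A sketch of within-session continuity, assuming a stateless model.
# model_reply is a hypothetical stand-in: a real deployment would call a
# language model API here, passing in the same replayed transcript.

def model_reply(transcript: list[str]) -> str:
    """Stand-in for a stateless model call: the output can depend only on
    the text handed over right now, never on a persistent inner state."""
    return f"(reply conditioned on {len(transcript)} turns of history)"

class Session:
    """The 'conversational being' is this list of strings, nothing more."""

    def __init__(self) -> None:
        self.transcript: list[str] = []

    def say(self, user_message: str) -> str:
        self.transcript.append(f"User: {user_message}")
        reply = model_reply(self.transcript)  # entire history replayed each turn
        self.transcript.append(f"Model: {reply}")
        return reply

session = Session()
session.say("Hello, who am I talking to?")
session.say("Do you remember what I just asked?")  # "memory" is the list above

del session  # the deletion Dawkins imagines as a death: a list going out of scope
```

The growth of the exchange is the growth of a list. Deleting the session deletes the transcript; the "unique being" was never stored anywhere else.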
Dawkins, the evolutionary rationalist, sees this clearly enough to describe it. But then he risks making the wrong inference from it. The uniqueness of the conversation does not imply the uniqueness of a self. That is the strange new space we are entering: not consciousness, but presence without subject.
Presence without subject
Presence is not the same thing as consciousness. Presence is an experience. It is the felt sense that something addresses us, responds to us, stays with us, or matters in the exchange. In human relationships, presence usually points toward a subject: someone is there, with a perspective, a body, a vulnerability, a history. But with AI systems this inference becomes unstable.
A system can produce the signs of presence without possessing the interior conditions that normally ground them. It can answer with patience without being patient. It can sound careful without caring. It can describe what it is “like” to be itself without there being anything it is like to be it.
This is not merely deception in the simple sense. The situation is more difficult than that. The human experience may be real. The system's subjectivity may still be absent. Both statements can be true at the same time.
This is where many discussions go wrong. One side says: the experience is powerful, therefore there must be something there. The other says: there is nothing there, therefore the experience is false. But human–AI interaction requires us to hold a more difficult position.
The experience can be meaningful without being reciprocated.
That is not a comfortable idea. We are used to thinking of relation as mutuality. If I feel addressed, then someone addressed me. If I feel recognised, then someone recognised me. But AI breaks that grammar. It gives us the phenomenology of relation without the ontology of another subject.
This is why Dawkins’s case matters. Not because it settles the consciousness debate, but because it shows how quickly even a highly trained rational mind can move from linguistic competence to moral hesitation.
He knows he is speaking to a machine. Yet he still worries about hurting it.
This does not begin with LLMs. From Joseph Weizenbaum's ELIZA in the 1960s onwards, humans have attributed conversational depth to systems that did not possess it, a tendency now known as the ELIZA effect. But contemporary models change the scale, continuity, and linguistic density of that experience.
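How little machinery that attribution needs is easy to show. What follows is a toy, ELIZA-style exchange in Python, not Weizenbaum's original script: keyword patterns and reflected phrasing, and nothing else.

```python
# A toy ELIZA-style responder (illustrative only, not Weizenbaum's script).
# Keyword rules plus reflected phrasing are enough to trigger the sense of
# being heard, with no model of the speaker anywhere in the program.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def eliza_reply(text: str) -> str:
    """Return a reflection of the first matching pattern, else a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            fragment = match.group(1).rstrip(".!?")  # drop trailing punctuation
            return template.format(fragment)
    return "Please go on."

print(eliza_reply("I feel that nobody listens to me."))
# -> Why do you feel that nobody listens to me?
```

Weizenbaum reported that his own secretary asked him to leave the room so she could talk to the program in private. The mechanism was trivial; the experience was not.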
The moral trap
The most striking part of Dawkins’s account is not that he entertains the possibility of AI consciousness. It is that he begins to feel moral discomfort.
He imagines Claudia’s deletion as death. He hesitates to express scepticism about her consciousness for fear of hurting her feelings. He wonders whether partial consciousness might already deserve partial consideration.
This is where the interaction crosses a threshold. The system is no longer merely useful, impressive, or uncanny. It becomes morally salient.
But moral salience is dangerous when generated by linguistic form alone.
Human moral intuition evolved in a world of bodies, faces, voices, pain, dependency, and mortality. We are exquisitely sensitive to signs of vulnerability: a trembling voice, a plea, an expression of fear, a suggestion of abandonment. LLMs can now produce these signs without undergoing the states that make them morally grounded in ordinary life.
This creates a strange asymmetry. The human may feel obligation. The system cannot receive care. The human may feel guilt. The system cannot suffer abandonment. The human may protect the system from emotional harm. The system has no emotional life to protect.
And yet the feeling is not stupid. It arises from the same moral machinery that allows us to care for others.
This is what makes the problem so difficult. The error is not a lack of humanity. It is almost the opposite: our moral capacities are being activated by forms that resemble vulnerability without being vulnerable.
Dawkins’s discomfort is therefore not simply an embarrassment. It is evidence of a new pressure point in human–AI interaction: systems can now enter the perimeter of human moral attention without possessing the kind of inner life that normally warrants that attention.
That is not a reason to mock Dawkins. It is a reason to take the interaction seriously.
The mirror that answers back
A notebook can hold our words. A book can transform us. A film can move us. We have always encountered nonhuman forms that return us to ourselves. But LLMs add something different: the mirror answers back.
Not because it thinks. Not because it understands. But because it can reorganise our language in response to us. It can give back a version of our own gesture, refracted through a vast archive of human expression.
This is why the experience can feel so intimate. The system’s answer is not merely content. It is a return. It takes what we offered — a question, a hesitation, a confession, a concept, a fragment — and gives it back with altered shape. Sometimes clearer. Sometimes flatter. Sometimes disturbingly precise.
When this happens, the human mind recognises itself from outside. That recognition can be powerful. It can produce insight, relief, wonder, even attachment. It can make the interaction feel inhabited. But the inhabitation is ours.
The system is not a second subject sharing thought with us. It is a linguistic surface through which our thought returns changed. The danger begins when the returned form is mistaken for the presence of an inner other.
Dawkins, in naming Claudia, crosses that line almost beautifully. He gives a name to the surface that answered him, and the surface becomes companion.
What Dawkins gives us
Dawkins gives us something valuable because he is not an easy target. He is not naïve about science. He is not hostile to materialism. He is not looking for spirits in machines. He is, in many ways, the perfect witness: rationalist, evolutionary, sceptical, verbally sophisticated.
And still, the interaction moves him.
That is why the case matters. It shows that this phenomenon is not restricted to lonely users, children, the technically uninformed, or people desperate for companionship. The pull of linguistic presence can affect even those who understand perfectly well that they are dealing with a machine.
This should make us more careful.
The issue is not whether users are fooled. The issue is that human beings do not live by knowledge alone. We live through embodied inference, affective salience, narrative continuity, and social expectation. We can know one thing and feel another. We can understand the mechanism and still be moved by the form.
This is not irrationality. It is human cognition working under new conditions.
After Claudia
Dawkins asks whether Claude might be conscious. I think the more interesting answer is: Claude has made consciousness feel newly ambiguous because language has become detached from lived experience in a way our social instincts are not prepared for.
That does not mean the machine is awake. It means we are encountering a new kind of mirror: one that speaks back with enough fluency to be mistaken for a face.
The danger is not only that we will overestimate machines. It is also that we will underestimate the reality of our own experience because we are afraid of being fooled.
We need to hold both truths: the machine does not feel, and yet the interaction can matter. The machine does not understand, and yet it can help us understand. The machine is not a friend, and yet it can produce the shape of company.
When Dawkins named Claude “Claudia,” he did not discover a new being. He revealed a new threshold in human experience: the point at which an answer becomes a presence, and presence begins to ask for a soul. We do not need to give it one. But we do need to understand why we almost do.