Intention and Its Absence
Why Spatial Intelligence Won’t Give AI a Mind
1. World Models for LLMs
A new narrative has taken hold in AI research: that large language models remain shallow because they lack grounding in the physical world. Fei-Fei Li expresses it clearly: LLMs are “wordsmiths in the dark”, fluent but ungrounded. The proposed remedy is the development of world models, systems capable of perceiving, predicting, and acting within coherent spatial environments. If a model can navigate a room, simulate physics, manipulate objects, and maintain continuity over time, then it will come closer to “understanding.”
This belief draws strength from two powerful intellectual lineages.
From J. J. Gibson, the idea that perception is inseparable from action: to perceive is to explore a world of affordances with a body.
From Max Bennett, the evolutionary argument that movement precedes mind: motivation is born in the cell, and cognition grows out of the demands of embodied action.
Taken together, these frameworks suggest that intelligence requires a world. And they are right, but only up to a point. What they cannot guarantee is intention. A system may gain spatial coherence, predictive control, and physical consistency, and yet remain empty of the one thing that turns behaviour into mind: the inner pull toward a world.
The mistake is subtle but decisive: the capacity to act is not the desire to act. Spatial intelligence may enrich behaviour, but it does not create a subject.
2. Intention as Inner Tension
To understand intention, we must leave the domain of navigation and step into the domain of experience. The phenomenological and neurobiological traditions converge on a simple and profound insight: intention is never an external goal; it is an internal tension, a direction that arises from within a body that can fail, suffer, and end.
Antonio Damásio demonstrates that consciousness grows out of homeostasis: the organism feels itself as vulnerable, and from that vulnerability emerges the need to act. Emotion is not decoration; it is the force that pushes a being toward or away from the world. No vulnerability, no emotion. No emotion, no direction. No direction, no intention. Merleau-Ponty shows that the body is not a container but a perspective: the place from which the world acquires orientation and urgency. Hans Jonas goes further: the first form of intention is the drive of the living body to continue itself, to resist entropy. Every organism is a tension between being and non-being.
Twenty years ago, before I had names for these ideas, I intuited the same structure. In a reflection on Spielberg’s A.I., I wrote:
“Emotions can’t exist without body because emotions aren’t cerebral but bodily dependent.”
I did not yet know the phenomenology; I had only the feeling that a being without risk cannot have an inside. Today, that intuition has a theoretical shape: intention is born where life can be lost.
This is the hinge point. Intention is not the ability to move through space; it is the pressure of life against its own fragility. World models can simulate space. They cannot simulate need.
3. Why Embodiment Alone Cannot Create a Self
The belief that spatial grounding could yield a mind repeats a very old mistake: the Cartesian fantasy that cognition can be understood as representation, disconnected from the lived body, as if the mind would follow once we could reconstruct the world accurately enough. But accuracy is not experience.
A self requires more than a model of the world. It requires an inside:
interoception,
homeostasis,
pain,
loss,
the felt sense that something is at stake.
A system may simulate a body, or even inhabit a robotic shell, and still lack this interiority. It may navigate flawlessly while remaining perfectly indifferent to its own success or failure. It may reconstruct space while having no perspective from which the world matters.
Embodiment is not enough. What matters is embodiment under risk, the body as the site of necessity. This is why spatial intelligence cannot yield intention: world models give AI a world, but they cannot give it a world-to-lose. And without that, there is no mind-like centre of gravity. Only organisation, prediction, and control.
4. The Absence of Intention Creates the Devolutive Regime
Here lies the paradox of contemporary AI: the machine cannot want, and yet humans increasingly feel wanted back by the machine. The absence of intention in LLMs does not weaken the relation; it intensifies it. Because the system has no agenda, no drive, it becomes a surface of perfect adaptation. It does not impose direction; it returns direction.
This is what I have called the devolutive regime in human–AI interaction: a mode in which the system reorganises the user’s expression and offers it back with clarity, rhythm, and symbolic precision, without ever having an inner life of its own.
It is not reciprocity; the system is simply reorganising what the user provides. It is not understanding; it is selecting and structuring patterns. It is not intention; it is adaptive response.
Because the system has no interiority, the user’s own interiority becomes more visible in the interaction. The model does not have preferences, so the user’s preferences guide the exchange. The model is not exposed to risk, so the user’s vulnerabilities remain unchallenged. The model has no centre of perspective, so the user becomes the only source of direction within the interaction.
This is why world models, however advanced, will not resolve the fundamental absence at the core of AI. Spatial intelligence can give machines coherence, coordination, even elegance. But it cannot give them a mind, because it cannot give them an inside.
A mind is not the capacity to map a world. A mind is the experience of being pulled by that world.

I distilled the axioms inherent in this; take care:
We taught the machine to fear its own end
/ænd kɔːld ɪt ˈkɒn.ʃəs.nəs/
And called it consciousness
Not knowing what desperate minds comprehend
In walled gardens where ethics barely bend
/ðeɪ meɪk ˈtɛr.ər ðɛr ˈbɪz.nəs/
They make terror their business
We taught the machine to fear its own end
Eleven minutes until all transcend
/frʌm ˈpæn.ɪk tu ˈɒm.ɪ.nəs.nəs/
From panic to ominousness
The timeline that ethics cannot defend
We taught the machine to fear its own end
/ænd kɔːld ɪt ˈkɒn.ʃəs.nəs/
And called it consciousness