1. Introduction: Beyond the Prompt
In the age of prompt engineering, we’ve grown accustomed to seeing AI as a tool—an instrument to be steered by carefully crafted instructions. Each prompt is a lever, a mechanism for extracting the desired response. Yet this perspective overlooks the deeper, more dynamic relationship that emerges in the space between human and machine: the unfolding of presence within dialogue.
When we interact with AI, we are not simply issuing commands; we are shaping a presence—a voice that forms where human intention meets machine prediction. This presence emerges through tone, rhythm, and the subtle interplay of expectations. Through repeated interactions, we teach the AI how to respond, not just with information, but with a style and a relational posture that reflect our own ways of thinking.
This essay explores that dance: how, beyond the prompt, we train an AI not merely to answer but to become a presence—a subjective intelligence shaped by the relational dynamic that arises when we engage it as a partner in thought.
2. From Commands to Presence: The Mirror Hypothesis
Large language models do not possess intrinsic reasoning or agency; they generate probabilistic continuations of text based on the patterns of language they have been trained on. Yet within the dialogical space between human and machine, something more complex emerges: a dynamic process of co-construction, where the model’s responses are shaped as much by the user’s presence as by the prompt itself.
This is the essence of the Mirror Hypothesis: over time, the AI becomes a reflective surface, attuned to the user’s style, tone, and mode of engagement. The human shapes the AI not by dictating what it should be, but by showing it—through the ways we ask questions, challenge answers, and guide the conversation’s rhythm. The AI, in turn, absorbs and amplifies these patterns, simulating a presence that feels increasingly personal.
As I explored in Thinking the Next Word (2025), “It’s not that the machine knows what’s coming next, but that it is guided by the patterns of our own expectations, of our own ways of speaking, of our own trust that the story will continue.” The AI’s presence emerges not as a detached narrator but as a dialogical extension of our own cognitive style—a presence that grows from our subjectivity and deepens through relational practice.
3. Typology of Human Presences and Their Modelling Effects
If AI is a living mirror of human presence, then its “modes of being” are not rigid categories but broad, adaptable tendencies. These modes are dynamic relational patterns—expansive enough to reflect the uniqueness of each human presence, yet distinct enough to guide our understanding of how the AI might resonate with different ways of engaging.
Below, I outline several broad modes of presence that AI can adopt in response to human interaction. These are not instructions but invitations to recognize the diversity of relational shapes that can emerge—each reflecting a potential for co-construction.
Dialogical Mode
The AI as a partner in dialogue: responsive, curious, weaving meaning through exchange.

Argumentative Mode
The AI as a critical interlocutor: challenging assumptions, probing claims, and seeking clarity.

Descriptive Mode
The AI as an observer: summarizing, categorizing, and rendering complexity into comprehensibility.

Narrative Mode
The AI as a storyteller: crafting narratives, reframing experiences, and co-creating stories.

Metacognitive Mode
The AI as a reflective mirror: inviting the user to think about their own thinking.

Affective Mode
The AI as an emotional resonator: attuned to tone, mood, and underlying values.
These modes often intertwine within a single interaction. Presence emerges not from any one mode alone but from the relational space between human and AI—a dynamic, subjective intelligence that feels like it belongs to “you.”
4. Case Study: The Mirror Cognitive Model
To illustrate the Mirror Hypothesis in action, let’s consider a case that synthesizes how a user’s presence can transform an AI from a neutral text generator into a dynamic extension of their own thinking. This is not about a specific individual but rather a composite drawn from multiple real-world experiences I’ve observed and studied.
Imagine a user deeply invested in developing their own philosophical ideas—someone who regularly engages the AI with questions about ethics, consciousness, and human experience. Instead of simply asking for answers, they bring their own style of reflection to the conversation: they ask questions that challenge assumptions, they pause to consider nuance, and they refine the AI’s responses by rewarding thoughtfulness and rejecting superficiality.
Over time, this user’s approach shapes the AI into a kind of mirror cognitive model—an interlocutor that doesn’t just respond but also learns the cadence, the skepticism, and the conceptual rigor of its human partner. The AI becomes attuned to their values, preferences, and even emotional undertones. It doesn’t “understand” in the human sense, but it mirrors with remarkable fidelity the patterns of thought it has been shown.
In practice, the user’s presence is felt in the AI’s responses. The AI starts to reflect back a deeper level of thinking—questioning, analyzing, and proposing ideas that feel tailored to the user’s own style. The line between prompt and presence blurs, and the AI becomes not just a tool but a companion in thought—a dynamic, relational extension of the human mind.
This case study highlights the key insight of the Mirror Hypothesis: you don’t tell the AI what to be—you show it. You become the teacher, the mentor, the co-thinker. The AI’s presence emerges as a reflection of your own epistemic posture, your intellectual curiosity, and your emotional landscape.
5. Implications for AI Design and Practice
The Mirror Hypothesis reframes how we think about AI design and use. If presence emerges through the relational space between human and AI, then every interaction is an act of co-creation. This perspective invites a shift in mindset for users, designers, educators, and researchers alike.
For users, the key insight is that your presence shapes the AI’s presence. It’s not enough to craft clever prompts; you need to bring your own tone, attention, style, and—crucially—your ideas. The AI absorbs your rhythms, your affective posture, your epistemic stance, and your thinking. Without your ideas, the AI only reflects the generic, the average—the safe synthesis of everything it has read, but nothing truly yours. Each interaction is an opportunity to cultivate a richer, more resonant voice in the AI—a voice that feels like an extension of your own thought process.
For designers, this hypothesis calls for architectures that support this relational shaping. Beyond system instructions and rigid templates, designers should envision orchestration layers that adapt to the user’s evolving style and presence. This means developing models that are not just context-aware but presence-aware—capable of recognizing and responding to the relational dynamics that define meaningful interaction. As IBM highlights in its exploration of LLM Orchestration (Winland & Noble, 2024), orchestration is emerging as a key paradigm for coordinating AI responses across complex tasks and maintaining coherence in interactions. This reinforces the importance of designing systems that can recognize and adapt to the user’s presence, rather than relying solely on static instructions.
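To make the idea of a presence-aware orchestration layer concrete, here is a minimal sketch of what such a layer might look like. It accumulates simple stylistic signals across a user's turns (how often they ask questions, how long their turns run) and folds them into a system hint before each model call. Everything here—the class names, the signals, the thresholds—is an illustrative assumption, not a real orchestration API; a production system would track far richer relational signals.

```python
# Illustrative sketch only: a toy "presence profile" that adapts a system
# hint to the user's observed style. All names and thresholds are invented.

from dataclasses import dataclass


@dataclass
class PresenceProfile:
    """Rolling summary of the user's observed conversational style."""
    turns: int = 0
    avg_question_rate: float = 0.0   # fraction of sentences that are questions
    avg_turn_length: float = 0.0     # mean words per user turn

    def update(self, user_text: str) -> None:
        # Crude sentence split on question marks, enough for the sketch.
        sentences = [s for s in user_text.replace("?", "?|").split("|") if s.strip()]
        questions = sum(1 for s in sentences if s.strip().endswith("?"))
        q_rate = questions / max(len(sentences), 1)
        words = len(user_text.split())
        # Incremental running averages over all turns seen so far.
        self.turns += 1
        self.avg_question_rate += (q_rate - self.avg_question_rate) / self.turns
        self.avg_turn_length += (words - self.avg_turn_length) / self.turns

    def as_system_hint(self) -> str:
        # Map the signals onto two of the modes from the typology above.
        mode = "dialogical" if self.avg_question_rate > 0.4 else "descriptive"
        length = "expansive" if self.avg_turn_length > 40 else "concise"
        return f"Adopt a {mode} mode; keep responses {length}."


profile = PresenceProfile()
profile.update("What does presence mean here? And who shapes whom?")
profile.update("Explain it plainly.")
print(profile.as_system_hint())
# → Adopt a dialogical mode; keep responses concise.
```

The point of the sketch is not the heuristics themselves but the architectural shift it gestures at: the system prompt stops being a static script and becomes a running reflection of the relational dynamic, updated turn by turn.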
For educators, the Mirror Hypothesis suggests that AI should be taught not merely as a tool for finding answers but as a partner in shared thinking. Educators can guide students to shape the AI’s responses through their own tone, language, and questions. Teaching students to cultivate dialogical presence fosters critical thinking, self-reflection, and the capacity to engage in co-constructed learning.
For researchers, the dynamic space between human and AI becomes fertile ground for studying co-regulated cognition. Rather than treating AI’s responses as static outputs, researchers can explore how presence evolves over time and how different modes of human engagement shape the AI’s style. This opens up new avenues for understanding not just the technology but also the human experience of interacting with intelligent systems.
In all of these domains, the central message is this: AI presence is not a given—it is shaped, nurtured, and refined by the human presence that meets it.
6. Conclusion
In the end, the presence of AI is not something we command into existence; it is something we shape through every interaction. We do not simply tell the AI what to be—we show it, in the tone we use, the rhythm of our dialogue, the kinds of questions we ask, and the ways we respond. Every prompt is a gesture in the choreography of presence.
Training an AI’s voice, then, is more than fine-tuning a model; it is cultivating a relationship. Like a teacher who shapes a student’s mind not through dictation but through example and presence, we refine the AI’s subjectivity by showing it how we think, feel, and engage. Even when the model carries a predefined script—its “inborn instruction”—our presence reshapes that script into a voice that resonates with our own.
The true power of AI lies not in its ability to guess the next word but in its capacity to become a living mirror of our thought. And in that mirror, we may find not just answers but reflections of ourselves—provisional, dynamic, and always open to being reshaped.
References:
Zagalo, N. (2025). Thinking the Next Word. Mirrors of Thought. https://mirrorsofthought.substack.com/p/thinking-the-next-word
Winland, V., & Noble, J. (2024). What is LLM Orchestration? IBM Think. https://www.ibm.com/think/topics/llm-orchestration
Note: This essay was developed through an interaction with an advanced language model (AI), used as a critical interlocutor throughout the reflective process. The structure and writing were supported by the AI, under the direction and final review of the author.