Introduction
We live in times when catastrophic and messianic discourses about artificial intelligence (AI) dominate the public space. Commentators warn of a “bloodbath” wiping out 50% of entry-level graduate jobs and promise economic growth of 30% per year, as if the future of society were entirely in the hands of language models trained on internet data.
It is curious that, amid this media hysteria, figures like Eric Schmidt take the TED stage to claim that AI is “underhyped”, as if the topic were not already the subject of thousands of articles, conferences, billion-dollar investments, and public discussions placing it at the center of global attention. To say that AI is “underhyped” is more than ignoring the real world: it demonstrates an inability to understand the present while presuming to dictate the future. It is the supreme irony of those who propose to redesign society yet are unable to perceive what surrounds them.
AI has the potential to transform the world, but we need to put things in their proper place so that people, especially students and professionals in training, are not victims of this disproportionate hype.
Buzzwords and the Enchantment with LLMs
In their interviews, Schmidt and other AI enthusiasts speak with almost messianic enthusiasm about language models capable of “writing papers in minutes” and “strategic planning,” claiming that AI is about to “invent like Einstein.” This rhetoric, often reinforced by expressions of wonder (“amazing,” “unbelievable”), creates the illusion that we are facing a form of superior intelligence, capable of replacing the human mind in complex tasks.
Large Language Models (LLMs), such as ChatGPT or DeepSeek, are statistical models that operate by calculating probabilities over sequences of words. As Tyler Austin Harper argues in “Artificial Intelligence Is Not Intelligent” (The Atlantic), LLMs do not “think,” “feel,” or “understand” — they are word-prediction machines trained to mimic linguistic patterns. Confusing linguistic output with thought is an epistemological and rhetorical error.
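To make “word-prediction machine” concrete, here is a deliberately crude sketch in Python: a bigram model that learns nothing but conditional word frequencies from a toy corpus. Real LLMs use neural networks over subword tokens at a vastly larger scale, but the underlying principle, predicting the next token from probabilities estimated on training data, is the same. The corpus and code are illustrative inventions, not drawn from any cited source.

```python
# Toy bigram "language model": the next word is chosen purely from
# conditional frequencies observed in a tiny, made-up corpus.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next | word) as relative frequencies: pure statistics,
    with no meaning and no understanding anywhere in the computation."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: round(c / total, 2) for w, c in counts.items()}

print(next_word_distribution("the"))
# {'cat': 0.33, 'mat': 0.17, 'dog': 0.33, 'rug': 0.17}
```

An LLM does the same kind of thing at an incomparably larger scale, with a neural network in place of a lookup table; what it never does is step outside the statistics of its training data.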
This confusion is skillfully exploited in the speeches of technology leaders, who present AI as an almost magical entity. The fascination arises from a well-known cognitive phenomenon: our tendency to attribute mind and intentionality to text, which Emily M. Bender and Alex Hanna, in their recent book The AI Con, call the “anthropomorphism effect.” AI speaks like a human, but it is not human.
Here comes one of the central ideas for understanding the true nature of AI: AI as a Dialogical Mirror. Without critical human dialogue, without questioning, AI merely repeats learned patterns. The true value of AI does not lie in “replacing the human,” but in enabling humans to reflect on themselves, confront the machine’s output, test hypotheses, and challenge ideas. Without the Dialogical Mirror, the LLM is just a Statistical Mirror — a limited reflective surface, incapable of returning the complexity of human experience.
When Schmidt talks about AI being able to write papers, he omits that:
A scientific paper is not just text. It requires a research hypothesis, critical review, validation of results, and, above all, scientific originality.
The LLM merely reorganizes linguistic patterns. It can be useful as a brainstorming tool but does not replace the creative and critical process of research.
AI has no notion of methodology, nor of the social or scientific relevance of the data it processes.
The illusion of “creative AI” fuels expectations that have no basis in the internal functioning of the models or in real scientific practice. This fascination, promoted by CEOs and technologists, mainly serves to feed the idea of technological inevitability, a discourse that, as history shows, is often used to justify investments, subsidies, and political reforms in favor of certain economic groups.
LLMs are extraordinary as support tools: they can help organize information, find writing patterns, and even generate drafts. But they do not have the capacity to question, interpret, or create new knowledge.
The 50% Graduate Job Bloodbath
Dario Amodei claims that AI will cause a “bloodbath” of 50% of entry-level graduate jobs, a prediction that, repeated ad nauseam in the media, becomes an uncontested truth. Eric Schmidt, for his part, reinforces the triumphalist tone by promising 30% annual economic growth with the help of AI.
These forecasts, in addition to being alarmist, ignore decades of economic and sociological research on the introduction of technology in the workplace. Talking about 30% sustained annual growth is more Silicon Valley marketing than economic science. Moreover, it is economically contradictory to expect that a technology which supposedly destroys 50% of qualified jobs would simultaneously generate 30% annual economic growth. After all, removing half the workforce from skilled positions would most likely reduce — not expand — overall economic productivity and consumption.
The idea that AI will “wipe out” 50% of qualified jobs is equally fallacious. AI, as a Dialogical Mirror, only returns to humans the statistical patterns we provide it — it does not autonomously create new work worlds. Yes, AI can automate repetitive and low-value tasks. But creativity, critical thinking, empathy, and adaptation to cultural contexts remain human competencies.
Moreover, this “bloodbath” discourse serves an ideological legitimization strategy: diverting the debate about labor precariousness and social inequality to a supposed “inevitable future” caused by AI; justifying massive investments in AI as the only solution to “not be left behind”; and instilling fear in workers and students, pushing them towards uncritical obedience to corporate interests. This strategy is further reinforced by another of Amodei’s recent opinion pieces in the New York Times, where he calls for transparency while repeating catastrophic scenarios that sustain the hype and technological inevitability narrative.
AI can and should support professionals, making them more efficient in repetitive tasks and helping them explore new forms of organization. But it does not replace people. What replaces people are political decisions: budget cuts, labor deregulation, and disinvestment in education. It is therefore essential to strengthen digital and social literacy so that AI is integrated as a Dialogical Mirror — helping people to think critically about their own roles and to adapt to changes with autonomy, instead of becoming hostages of apocalyptic narratives.
AI Scientist: Myth vs Reality
Einstein did not invent the theory of relativity by chance or divine inspiration: he built it upon a century of scientific and philosophical debates. As Newton famously said, “If I have seen further, it is by standing on the shoulders of giants.” Scientific creativity emerges from human interaction — from the exchange of ideas, constructive criticism, and interdisciplinary synthesis.
AI, as a Statistical Mirror, does not generate disruptive creativity. Its role is to reorganize existing patterns from its training data and return them based on probabilities. It cannot identify relevant analogies between areas of knowledge or assess the epistemic importance of those analogies.
François Chollet, creator of the ARC benchmark, argues that LLMs are essentially large memory banks whose performance reflects not intelligence but statistical interpolation. The difficulties these models have with ARC puzzles show precisely this limitation: they lack the ability to synthesize truly new solutions, functioning only as mirrors of what they have seen before.
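A crude caricature of Chollet’s point, using toy tasks invented here rather than actual ARC puzzles: a “model” that only stores input/output pairs and answers by nearest-neighbor lookup seems capable near its training data and fails the moment a task requires a rule it has never stored.

```python
# Memory-bank "model": it has seen one rule (reverse the sequence)
# and answers any query by copying the output of the nearest stored input.
memory = {
    (1, 2, 3): (3, 2, 1),
    (4, 5, 6): (6, 5, 4),
    (7, 8, 9): (9, 8, 7),
}

def memory_bank_answer(query):
    # Nearest neighbor by elementwise distance: interpolation over
    # seen patterns, with no representation of any rule at all.
    nearest = min(memory, key=lambda k: sum(abs(a - b) for a, b in zip(k, query)))
    return memory[nearest]

print(memory_bank_answer((1, 2, 4)))  # (3, 2, 1): near the data, looks smart
print(memory_bank_answer((9, 1, 5)))  # (6, 5, 4): if the task now required a
                                      # new rule (say, sorting to (1, 5, 9)),
                                      # lookup can only return reshuffled memories
```

Real LLMs interpolate in a far richer space than this lookup table, but the failure mode Chollet describes is of the same kind: fluency near the training distribution, brittleness outside it.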
Furthermore, AI does not live embedded in a scientific community. It does not discuss with peers, face critical resistance, or subject ideas to experimental validation processes. AI only returns linguistic patterns. Without the Dialogical Mirror, AI remains a machine of imitation, not invention.
Apple’s paper, The Illusion of Thinking (Shojaee et al., 2025), demonstrates that as puzzle complexity increases, Large Reasoning Models (LRMs) quickly collapse in performance. Longer chains of thought do not resolve the fundamental limitation: AI remains a statistical mirror, reorganizing known patterns without genuine capacity for innovation. The study reinforces that AI is not “Einstein” but a reproducer of patterns, no matter how sophisticated the chains of reasoning it builds.
The article “Inside the Secret Meeting Where Mathematicians Struggled to Outsmart AI” (Scientific American, 2025) illustrates well how even expert mathematicians can be surprised by AI solutions to complex problems. But this surprise does not mean autonomous creativity by the machine: it results from its statistical ability to explore vast solution spaces, something that sometimes defies human intuition. This phenomenon reveals more about human limitations in dealing with statistics than about supposed creativity of AI.
AI can be extremely useful in identifying unexpected correlations in large volumes of data. It can support the scientific discovery process as an exploratory analysis tool. But it does not replace the human role of generating hypotheses, questioning assumptions, and creating meaning. The future of science will continue to depend on humans capable of critical dialogue, not on machines that merely return fragments of language.
The Synthetic Data Simulacra
Eric Schmidt, among others, claims that because “we have exhausted real data,” we can now simply generate new data — “and we can easily do that, because it’s one of AI’s functions.” This assertion suggests that AI can autonomously create high‑quality data, filling any information gaps and sustaining economic and scientific growth.
This view reveals a profound misunderstanding of the nature of data, especially data relevant to the human and social sciences, where statistical variability is not mere mathematical randomness, but a reflection of social, historical, and cultural contexts.
Mathematically generating synthetic data is easy: just apply statistical models and simulate patterns. But that does not create relevant novelty, nor introduce the variability that characterizes human experience. The great illusion here is confusing synthetic data with data that carry human meaning. In reality, such data replicate existing biases, amplifying errors and prejudices. Worse: when generated automatically, they can reinforce knowledge bubbles, making critical questioning even harder.
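A minimal sketch of the problem, with invented numbers: fit a distribution to a biased sample, draw “new” data from the fit, and the synthetic set, however large, reproduces exactly the bias it started from.

```python
# Fitting a distribution to biased data and sampling from it yields
# more data with the same bias, never new information.
import numpy as np

rng = np.random.default_rng(42)

# Suppose the "real" dataset over-represents one group: salaries
# collected mostly in one well-paid region, centered too high.
biased_real_data = rng.normal(loc=70_000, scale=5_000, size=1_000)

# "Generating data" the easy way: estimate parameters, then sample.
mu, sigma = biased_real_data.mean(), biased_real_data.std()
synthetic = rng.normal(loc=mu, scale=sigma, size=100_000)

print(f"real mean:      {biased_real_data.mean():,.0f}")
print(f"synthetic mean: {synthetic.mean():,.0f}")
# The synthetic set is 100x larger, yet carries exactly the same skew:
# nothing was learned about the people missing from the original sample.
```

Scaling the sample up by two orders of magnitude adds volume, not variability: the under-represented contexts stay under-represented, only now behind a larger number.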
And here again the importance of the Dialogical Mirror emerges: only critical human interaction, capable of confronting data with hypotheses, contexts, and interpretations, can transform information into knowledge. Without that mediation, AI does no more than duplicate patterns — it becomes a mirror of a mirror.
Trump’s Economy, Tariffs, and Misplaced Blame
Eric Schmidt and other technologists describe AI as the primary culprit for current economic issues, as if it were responsible for crises in production chains, rising component costs, and inflationary pressures. For Schmidt, for example, AI’s energy needs and supply difficulties (often tied to China) are proof that we are experiencing an AI‑driven economic transition.
This rhetoric conveniently omits the devastating impact of protectionist economic policies implemented during the Trump administration, such as tariffs on Chinese goods, which disrupted global supply chains and increased logistical and production costs across multiple sectors.
By attributing these problems to AI, as if it were an “autonomous economic agent,” Schmidt and other technologists shift the debate away from the real causes: concrete political decisions, business choices, and structural inequalities in access to capital, technology, and job opportunities.
This phenomenon reveals a dangerous inversion of responsibility: instead of holding political and corporate actors accountable, the blame shifts to a technology, depersonalizing politics and removing accountability from those who actually made the decisions.
Who Really Matters: People
Amid all the hype — a 50% jobs “bloodbath,” 30% economic growth, AI “inventing like Einstein” — real people get lost: students who fear they are studying for unemployment, workers who dread becoming disposable, and citizens who suddenly find themselves spectators of a dystopian future designed by technologists.
When we switch off critical thinking, we become hostages of the technofetishist narrative that presents AI as inevitable and superhuman. It is crucial to refocus the debate on people:
Students need digital and social literacy, not fear. They need to know AI is a support tool, not a replacement — and that adaptability, creativity, ethics, and empathy will remain irreplaceable competencies.
Professionals need to understand that AI can automate routine tasks but also augment their work, provided they are involved in its design, adaptation, and critical use.
Citizens need public policies that democratically and transparently regulate AI, preventing it from becoming a weapon of economic or political manipulation.
Without this focus on people, AI becomes just a tool of power for the technologists and large corporations that dominate public discourse.
Conclusion
Artificial intelligence is shaping our present and our future. But the way we describe it — and how we accept or reject these descriptions — makes all the difference.
The hegemonic narrative, fueled by figures like Eric Schmidt and Dario Amodei, uses buzzwords and hyperbolic predictions to transform AI into an autonomous and inevitable agent, capable of replacing human work, revolutionizing science like Einstein, and generating 30% annual economic growth. This narrative, while frightening, is also seductive, and serves mainly to legitimize political and corporate investment strategies and social disengagement.
But the reality is different. Large Language Models do not think, do not invent, do not feel. They are statistical machines for linguistic reorganization, whose usefulness lies in enhancing human productivity, not replacing it. AI does not create human-meaningful data out of thin air. And it certainly does not solve economic crises caused by misguided political decisions.
As Paul Formosa warns in The Conversation (2025), AI, far from ushering in a “cognitive revolution,” risks impoverishing imagination by promoting uniform and predictable responses. Instead of opening pathways to creativity, it reinforces what is already in the data, lowering human potential. This warning reinforces the importance of critical dialogue in the use of AI: only with demanding human thinking can we avoid AI turning us all into mere repeaters of statistics.
This is where the concept of the Dialogical Mirror becomes central: AI only returns something of value when we interact with it critically, inquisitively, and reflectively. Without human dialogue, AI is just a Statistical Mirror — a repository of patterns, biases, and simulations.
Gary Marcus, in multiple articles, has drawn attention to the need for hybrid models that combine statistical learning with symbolic reasoning, so that AI can truly serve society in a robust and reliable way. Marcus reminds us that “today’s AI systems are great at recognizing patterns, but they completely fail when it comes to deep understanding” — exactly what distinguishes a useful tool from an entity capable of transforming science or human work.
For AI to be a tool for progress — and not a force for the destruction of jobs and rights — we need digital literacy, democratic regulation, and an ethical vision that puts people at the center. We cannot delegate our autonomy and creativity to mathematical models, nor accept predictions of “bloodbaths” and economic miracles as unquestionable truths.
AI is not a prophecy. It is a technology.
References
Amodei, D. (2025). Regulating AI’s future requires transparency. The New York Times, June 5. URL: https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html
Bender, E. M., & Hanna, A. (2025). The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. HarperCollins.
Chiou, L. (2025). Inside the Secret Meeting Where Mathematicians Struggled to Outsmart AI. Scientific American. URL: https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/
Harper, T. A. (2025). Artificial Intelligence Is Not Intelligent. The Atlantic. URL: https://www.theatlantic.com/culture/archive/2025/06/artificial-intelligence-illiteracy/683021/
Marcus, G. (2025). Marcus on AI. Substack.
Patel, D. (2024). Francois Chollet – Why The Biggest AI Models Can’t Solve Simple Puzzles [Video]. YouTube.
Schmidt, E. (2025). The AI Revolution Is Underhyped [Video]. TED.
Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025). The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models. Apple. URL: https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
VandeHei, J., & Allen, M. (2025). Behind the Curtain: A white-collar bloodbath. Axios. URL: https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic
Zagalo, N. (2025). Don’t tell the AI what to be—show it. Mirrors of Thought. URL: https://mirrorsofthought.substack.com/p/dont-tell-the-ai-what-to-be-show
Zagalo, N. (2025). The Dialogical Mirror. Mirrors of Thought. URL: https://mirrorsofthought.substack.com/p/the-dialogical-mirror
Note: This essay was developed through interaction with an advanced language model (AI), used as a critical interlocutor throughout the reflective process. The structure and writing were supported by the AI, under the direction and final review of the author.