Discussions around Artificial Intelligence today seem to be divided into three large churches: the Dogmatists, who see AI as the technological salvation of humanity; the Doomers, who anticipate the imminent collapse of civilisation at the hands of algorithms; and the Deniers, who reject any value or novelty in what is happening. Three distinct forms of belief, which curiously share an essential characteristic: all of them have replaced critical analysis with doctrine.
It was precisely this last attitude, that of systematic denial, that I found to be the predominant tone of The AI Con (2025), by Emily M. Bender and Alex Hanna. I read the book to the end, chapter after chapter, hoping to find some balance. A conceptual anchor. A gesture of doubt. A proposal that wasn't just demolition. But what I found was a long list of negative cases, accompanied by indignant commentary, in a tone of moral denunciation disguised as academic criticism. Bender and Hanna promise an analysis of AI hype, but in fact offer a total rejection of AI, a reverse hallucination that is, in essence, as dangerous as the techno-utopian delirium they claim to be fighting.
The structure of the book is almost always the same: point out a case, describe the scandal, disqualify those involved, and move on to the next. There is no attempt to understand the systems from the inside, to technically deconstruct the errors, to listen to real users, or to recognise contexts of creative, productive, or transformative use. None of that matters. Because, for the authors, all uses of AI stem from an ontological error: these tools could never work as promised, and therefore any manifestation of their use is merely a symptom of corruption, oppression, or collective stupidity.
The problem is that this diagnosis is indistinguishable from what the authors themselves criticise in chapter six, when they attack the AI Doomers. Yes, those who say that AI will destroy humanity, create uncontrollable superintelligences and wipe us all out, those who live off science fiction cloaked in technical authority. Bender and Hanna say they are against them. But they are on the same side. The side that thrives on projecting catastrophes and only sees ruins. The side that turns criticism into dogma and thought into propaganda.
Because the entire book is a sermon against the use of AI in healthcare, education, journalism, law, work, creativity, in short, in any domain of contemporary life. And not for reasons of misuse, context, or poor regulation. But for a deeper, more dangerous reason: because they deny that there is anything new or useful at stake. Their position is not critical, it is denialist. Their proposal is not reflection, it is abstinence. And that, coming from researchers with public responsibility (Emily Bender was named by TIME Magazine as one of the 100 Most Influential People in AI in 2023) is a political gesture with consequences.
The refusal here is not a refusal of hype. It is a refusal of technology. It is not analysis. It is dogma. And that dogma not only prevents the construction of alternatives but also weakens the field of criticism itself, offering big tech companies an easy straw man against which to defend themselves. When criticism becomes blind, it loses its power. It becomes a new form of moralism, without nuance, without history, and without practice, one that is part of the problem itself.
I have publicly criticised the overly optimistic views of Dogmatists, like Ethan Mollick, Sal Khan or Bill Gates. But anyone who thinks that the antidote to hype is nihilism is making the same structural mistake. Criticism that does not build is just another chapter in the confusion. There is a maxim at Pixar that is worth remembering: all criticism should be a ‘+1.’ Point out what is wrong, but propose how to improve it. The book The AI Con fails radically in this regard. It offers us no model, no vision, no alternative. Only the desire to wipe AI off the map. And that is not criticism. It is ideological despair.
But perhaps the most serious thing about The AI Con is the deliberate omission of the collective efforts that, in recent years, have sought to guide the development of AI in a critical and socially responsible manner. In 2021, UNESCO published the Recommendation on the Ethics of Artificial Intelligence, advocating transparency, inclusion, human oversight, and the protection of fundamental rights. The OECD launched the AI Policy Observatory, with data, frameworks, and examples of good practices in public policy. And the European Union approved the AI Act, imposing limits on high-risk systems and reinforcing the requirement for auditability.
In May 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, the first legally binding international treaty in this area. And in August 2024, UNESCO launched the AI Competency Framework for Educators, promoting critical and creative uses of technology.
These documents are not decorative. They are real attempts to address the dilemmas of AI in a world where it is already in use, in schools, hospitals, universities, courts, businesses, and artistic practices. Ignoring this work, or treating it as irrelevant or complicit, is a misinformed gesture. Worse still, it abdicates participation in the critical construction of the future.
This total refusal, which the book stages with almost religious zeal, not only silences the creative and responsible uses of AI but also prevents any form of pedagogy, digital literacy, or situated action. It is in this void that contrary reactions arise, reactions that seek to map the limits without erasing the potential. Some critical authors and researchers, despite their reservations, continue to use and analyse AI in their intellectual daily lives, as I pointed out this week in Beyond the AI Hype: A Critical Look (2025). In these texts, the tone may be critical, but it is never absolutist. The goal is not to destroy, but to understand. The challenge is not to abolish technology, but to design it with responsibility and vision.
The technology is here. It is being used in schools, universities, hospitals, libraries, and creative processes. To ignore this is to bury one's head in the sand. The real challenge lies elsewhere: to regulate intelligently, integrate fairly, and experiment carefully. And that requires thought, not doctrine.
References:
Bender, E. M., & Hanna, A. (2025). The AI Con: How to fight Big Tech’s hype and create the future we want. HarperCollins.
Council of Europe. (2024). Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. https://www.coe.int/en/web/artificial-intelligence/convention
European Parliament, & Council of the European Union. (2024). Artificial Intelligence Act. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
UNESCO. (2024). AI Competency Framework for Educators. https://unesdoc.unesco.org/ark:/48223/pf0000389399
Zagalo, N. (2025, June 10). Beyond the AI Hype: A critical look at techno-solutionism. Mirrors of Thought. https://mirrorsofthought.substack.com/p/beyond-the-ai-hype-a-critical-look