Generating Is Not Creativity
On the Feeling of Creative Urge
Silvia Rondini’s new study on visual creativity is worth taking seriously because it exposes a confusion that has shaped too much of the public conversation about AI: the idea that strong performance in language is enough to prove creativity in general. In the study, visual artists produced the most creative results, followed by non-artists and human-guided AI, while self-guided AI came clearly last. That gradient matters because it shows that, once we move away from semantically structured verbal tasks and into open-ended visual imagination, the human–AI gap does not disappear. It becomes harder to hide [study] [interview].
This matters to me because I have been arguing for some time that we have confused fluency with creativity. Writing fluently is not the same as writing creatively. Large language models have looked impressive in verbal creativity tasks largely because those tasks often reward fluency, elaboration, and semantic distance more than originality in any deep sense. A system that can generate many plausible associations at speed will perform well under those conditions. But that does not yet amount to creative agency. It amounts to combinatory strength under favourable testing conditions. Rondini’s work matters because it helps draw that line more clearly.
This is also why I wrote earlier that AI can imitate language more easily than cinema. In that piece, I argued that language can leap where cinema cannot. A sentence can compress perception, mood, causality, and interpretation into a line. Cinema has to solve all of that through bodies, space, framing, rhythm, duration, and the cut. It does not merely describe a world; it organises a world for perception. That difference matters here too. The success of LLMs led many people to assume that all creative forms would fall in the same way, as if image, film, and visual form were simply language with pixels attached. They are not.
Rondini’s study gives that intuition an experimental backbone. What matters is not just whether a model can produce an image, but what happens when the task is open, abstract, and weakly framed. In the experiment, the self-guided system performed worst. It improved sharply only when a human-generated idea was inserted into the prompt. That is not a small technical detail. It is the conceptual centre of the result. The model did not discover the frame. It needed the frame to be given.
This connects directly to another argument I made in The Myth of Autonomous Discovery. There I argued that generative AI fills frames more easily than it builds them. Once the relevant variables have been named, once the governing context has been imposed, once the semantic anchor is in place, the model can do impressive work. It can extend, recombine, stylise, and accelerate. But choosing what matters, deciding what counts as relevant, and constructing the frame that gives the output meaning still remain, in serious cases, human tasks. Rondini’s study gives this claim empirical force. The system does not spontaneously generate the orienting structure that makes creativity possible in the richer human sense. It depends on human framing to approximate it.
The phrase Rondini uses in the interview is especially important: the model struggles when there is no “semantic anchor.” That is exactly the right expression. Without a semantic anchor, a human creator can still draw on memory, bodily perception, autobiographical residue, cultural experience, and the pressure of lived reality. A human being does not need to be handed a frame in the same way because human imagination is not operating in a vacuum. It is embodied, situated, and historically formed. A generative model is different. It does not inhabit a world. It operates over correlations extracted from representations of a world. That difference is not superficial. It goes to the root of why current systems can simulate aspects of creativity without yet becoming autonomous creative agents.
This is also why I think the most important line in the interview is not the one about model performance, but the one about pleasure. Rondini says that if we define creativity only in terms of novelty and usefulness, we forget that humans like to create. That line matters because it restores what so much AI discourse deletes. Creation is not only about the object produced. It is also about the lived experience of producing it: the pleasure of shaping form, the satisfaction of bringing something into existence, the reward of feeling one’s own agency at work. This is also where psychology offers a useful lens. Self-Determination Theory has long argued that human beings are intrinsically motivated to act, explore, and develop their capacities, not only for external rewards, but because doing so is inherently satisfying. Creativity, in this sense, is not an added layer on top of cognition. It is one of the ways that autonomy and competence become visible in action.
This, for me, is the real limit of the current discourse around AI art. We keep asking whether the machine can produce something that looks creative, instead of asking what kind of being creativity belongs to. A model can assist. It can surprise. It can generate artefacts that are interesting, useful, even beautiful. But that is still not the same as living the process of creation. It may simulate the result. It does not yet experience the necessity of creating, or the pleasure that humans derive from the act itself. Perhaps that is also why AI-generated art often encounters immediate resistance: not only because of ethics or quality, but because people sense that something in the circuit between form, agency, and lived experience has been interrupted.
This matters even more in education. If children create small animations, images, or stories with AI agents, the key question is not only what gets produced. It is what kind of creative process is being formed in the child. If we evaluate only the final artefact, we may miss the most important thing. Where did the idea begin? Who provided the frame? When did surprise emerge? Did the child feel authorship, or only selection? Did the system expand the imaginative process, or close it too early by supplying ready-made possibilities? These are not secondary questions. They go to the heart of what creation is in developmental terms. Creativity in childhood is not simply output. It is exploration, hesitation, invention, appropriation, and joy. If AI enters that space, what matters most is not whether it helps children make things, but whether it helps them become creators rather than mere curators of generated options.