The Argument: Art as an Unsolved Problem
A post on the subreddit r/changemyview has reignited one of the most contested questions in the generative AI debate: is artistic creation merely a sufficiently complex problem, or is it something fundamentally different from the kinds of tasks machines have recently mastered? The original poster, who identifies as both a writer and a visual artist, frames the question in deliberately uncomfortable terms. The core premise is straightforward — large AI models have progressed from struggling with basic arithmetic to achieving competitive performance in advanced mathematics, reaching medal-level results at competitions like the International Mathematical Olympiad and contributing to solutions for longstanding research problems. Coding ability has advanced at a similarly unanticipated rate. If AI can do all that, the argument goes, why should art be any different?
The poster acknowledges a lag in AI-generated art relative to these other domains, and even entertains the idea that art's complexity might be genuinely greater than that of formal reasoning tasks. But the conclusion drawn is that this lag is likely temporary. AI is characterized as a general-purpose problem-solving machine, and art — however emotionally resonant or culturally significant — is still, at some level of abstraction, a problem waiting to be solved.
The Case for AI Eventually Mastering Creative Work
Supporters of this view point to observable trends as their primary evidence. Progress in generative image models, large language models capable of producing fiction, and audio synthesis tools has been rapid and largely continuous. Those who hold this position argue that previous generations confidently identified tasks AI could never do — chess, Go, medical diagnosis, legal drafting — only to see those predictions overturned. The reasoning follows that asserting art's permanent immunity from automation risks repeating the same error.
From this perspective, what humans perceive as emotional depth or authentic expression in art may be, at a computational level, a pattern-recognition and generation problem of high dimensionality. If models can internalize enough of human experience through training data — which already encompasses an enormous volume of literature, visual art, music, and film — they may eventually produce outputs that are indistinguishable from, or even surpass, human creative work in the ways audiences actually measure value: emotional impact, narrative coherence, aesthetic novelty.
There is also an economic argument embedded here. Companies have, to date, focused AI development on domains that most directly reduce costs or generate revenue, and artistic fields have seen comparatively little of that targeted, benchmark-driven investment. The implication is that if and when commercial incentives fully align with advancing generative art quality, the pace of improvement could accelerate dramatically.
The Counterarguments: What 'Solving' Art Actually Means
Critics of this framing challenge the premise at a foundational level. For many respondents in the thread, calling art 'a problem' already misunderstands what art is and what it does. Art, in this view, is not primarily an output to be evaluated for technical correctness — it is a communicative act embedded in specific human lives, histories, and social contexts. The value of a poem or a painting is inseparable from the fact that a particular human being, shaped by particular experiences, chose to make it and share it.
On this reading, AI cannot solve art because art is not a problem with a solution. It is an ongoing, open-ended negotiation between creators and audiences, constantly redefined by cultural change, personal circumstance, and historical moment. Even if a model produces text or images that are technically polished and emotionally affecting, the question of whether that constitutes art — rather than a highly sophisticated simulation of art — remains genuinely unresolved.
Others raise concerns about what is lost when art is framed purely as a productivity domain. The creative process itself — the struggle, the revision, the failure — is, for many practitioners, a significant part of art's meaning both for creators and, indirectly, for audiences who understand that something real was at stake in its making. If generative AI removes that struggle, it may produce aesthetically similar outputs while hollowing out something that gives art its weight.
The Practical and Ethical Dimensions
Beyond the philosophical debate, commenters also engage with material consequences. The displacement of working artists, illustrators, writers, and musicians by cheaper AI-generated content is already underway in some industries. For those whose livelihoods depend on creative work, the question of whether AI art is real art becomes less relevant than the question of whether it will be purchased and used instead of their own.
There is also a longer-term cultural question: if the majority of consumed creative content is generated by models trained on past human work, what happens to the stock of genuinely novel human experience that would otherwise feed future training data? Some argue this creates a feedback loop that could degrade creative quality over time; others are more optimistic about human adaptation and the emergence of new creative forms that AI cannot easily replicate.
The r/changemyview thread did not produce a clear consensus — perhaps fittingly, given the subject matter. What it does illustrate is that disagreements about generative AI and art are rarely just technical. They reflect deeper disagreements about what creativity is, who it is for, and what it means to say that a machine has matched or exceeded a human in producing it.
Source: r/changemyview — CMV: art is a problem that generative AI will probably solve