Ursula K. Le Guin once wrote, “There is no right answer to the wrong question.” While artificial intelligence might miss the nuance, human readers grasp the point immediately: addressing the wrong premise leads nowhere. This philosophical misstep lies at the heart of ongoing debates about whether AI “learns” as humans do — a conversation recently rejoined by Professor Nicholas Creel.
The answer, as outlined in Erik J. Larson’s The Myth of Artificial Intelligence, is simple: it doesn’t. Artificial intelligence lacks the capacity for genuine novelty, struggles with uncertainty and incomplete information, and operates without empathy or true understanding. Unlike humans, AI systems cannot extrapolate truths from limited inputs; they can only replicate and recombine vast quantities of existing data.
Even Microsoft’s own AI tool, Copilot, concedes this difference. When asked whether AI reasons like humans, it responded:
“AI relies on large datasets and algorithms to make decisions and predictions. It processes information based on patterns and statistical analysis… It doesn’t have intuition or emotions influencing its decisions.”
This kind of pattern-based processing is a far cry from human cognition. As Moiya McTier of the Human Artistry Campaign explains, creativity arises from more than mere data analysis. It is rooted in culture, geography, family, and lived experience — elements that shape and define individuals. AI, by contrast, generates outputs devoid of this human context.
So yes, AI “learning” is fundamentally different. But for those working in industries already impacted by the rise of generative models — artists, musicians, writers — the real issue isn’t whether AI learns like we do. As Le Guin suggested, the better question is: What does AI actually do? And is the cost of that worth it?
To develop large-scale AI models, companies must feed them vast amounts of copyrighted material — works that are copied, modified, and redistributed across networks. These actions involve three exclusive rights granted to authors under U.S. copyright law: reproduction, creation of derivative works, and distribution.
Traditionally, anyone wishing to use copyrighted material at scale would need to license it. However, most AI companies have sidestepped that process, using protected works without permission. In effect, they’ve unilaterally decided that the price of others’ intellectual property is zero.
The consequences are significant. This approach undermines the value of global creative output and threatens real jobs and livelihoods. Beyond the economic damage, there’s a broader cultural cost: a future flooded with derivative, repetitive content instead of original, groundbreaking work. AI models, no matter how sophisticated, cannot break creative molds — they merely reshape existing ones.
Yet many in the tech industry seem determined to humanize these systems, casting them as quirky, lovable robots that “learn” like children. This narrative reframes copyrighted material as harmless “training data,” and mass copying as a natural function of machine learning — rhetoric designed to obscure the reality of large-scale copyright infringement.
This sanitized image of AI is a fairytale — one that benefits trillion-dollar corporations and their backers. It downplays the ethical and legal implications of what’s happening: a quiet but far-reaching exploitation of creative work, repackaged for profit without consent.
That kind of deception might confuse an AI. But human beings — especially the ones who’ve built the culture and creativity being mined — aren’t so easily fooled.