Why Generative AI Angers Artists but Not Writers
ChatGPT is more popular than AI art, so why aren't writers reacting?
I share the widespread belief that generative AI will impact all kinds of office jobs. Artists, writers, coders—and anyone who falls under the “white-collar” label—are in the danger zone.
AI can draw beautiful images, write decent prose, code clean programs, and everything in between. Even if it doesn't work flawlessly, it’s got the potential to disrupt knowledge and creative jobs—soon, a human using a state-of-the-art AI system will be worth an entire team of people.
ChatGPT sounded the alarm for everyone. You and I became aware well before that—generative AI had been in the making for several years when the popular chatbot was released—but most people didn’t know. Today, the news is everywhere: big changes are coming.
I still know a few people who don’t even know what “AI” stands for, but every day more people realize that generative AI is an inflection point. Some workers see the upsides for progress and productivity and perceive AI systems as a means to enhance their abilities.
Others don’t. Others resent technological progress to the extent that it threatens their way of life. They despise it, and although sometimes the motivation is purely personal, in most cases it reflects a deeper conflict with the engine that moves the world forward, and with how little it cares about who it leaves behind.
What I find most surprising about the latter group is how unevenly its members are distributed: the hostility toward generative AI is highly concentrated in a few professions. We’re going to understand why.
(Side note: Stay tuned for Friday’s article—open for all subscribers. I’ll cover the news on Microsoft’s “New Bing” and Google’s Bard.)
Artists are taking generative AI very seriously
From where I stand, artists (an umbrella term covering designers, painters, illustrators, etc.) seem more angry and afraid than the rest. If we accept that anger acts here as a proxy for danger (although other explanations, like artists being more passionate, could help explain the contrast), we can conclude that they also feel especially vulnerable. It’s easy to explain why coders are, if anything, thrilled about AI, but writers, who work under conditions similar to artists’, seem strangely calm. Aren’t they threatened, too?
Since the new AI art scene began to gain momentum in early 2021, I've seen traditional artists show concern and even hostility toward image generation tools like DALL-E and Stable Diffusion many times. Some are very vocal on social media (not always with the best manners) and are taking extreme measures to fight for their rights. They’re united in pressuring policymakers to establish adequate regulation on generative AI (I support their efforts) and in raising concerns about the dubiously legal practices of AI companies (e.g. scraping copyrighted content without permission or attribution). Lawsuits are coming, and they may or may not attain what they're looking for. But they're trying.
Yet, even though AI could pose an equally serious hazard to writers, I don’t see us doing the same. When I look at my guild, I perceive a sharp contrast with artists: writers are either learning to take advantage of generative AI, dismissing it as useless, or ignoring it altogether.
I don’t see hostility or anger. I don’t see fear.
As a writer myself, I find that my own feelings match these observations. I don’t feel threatened by AI. Neither GPT-3 three years ago nor ChatGPT now changed that. Although anecdotal in my case (I’m heavily influenced by my knowledge of AI), I think this feeling is ubiquitous among my peers.
I don’t have the tools to test this quantitatively, so don't take this essay as a scientific assessment but as an exploratory search for explanations of a behavioral gap (if it is, in fact, real) between two large groups of workers who should be, at a superficial glance, similarly affected by the recent developments in generative AI.
My perception tells me there’s indeed a significant difference in the degree to which AI threatens artists and writers (on average), and we’re going to discover why. I’ll leave aside how the world regards these two groups of workers because I’d say they’re mistreated to approximately the same degree. Instead, I’ll focus on the way art is created and perceived depending on its nature.
Visual artistry is (mostly) about style
Last week I shared my hypothesis on Twitter and got a bunch of interesting responses. Most of them were variations of the same argument: Style, understood as the array of elements that comprise the idiosyncrasy of an artist, is much more salient in the visual domain than in writing.
Wired’s Will Knight argued that “you can instantly see elements of other artists style in imagery [but] it's much harder to sense that copying and remixing in text.” And Parvaz Cazi offered an example: you can easily “spot cubist influence” but “would need a trained reader … to identify [a] Hemingway sentence.”
I agree with this view: Style is a fundamental property in visual art. It’s also present in writing but less essential (I’ll explain why later).
This, in turn, makes individual artists vulnerable to AI’s outstanding ability to remix the data it’s been fed into something original enough to avoid plagiarism, yet not so original that it loses the feel of a familiar style. Generative models like Stable Diffusion and Midjourney can easily regurgitate styles; using an author’s name in the prompt is enough to make every output seem like it’s got their unique touch.
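To make that concrete, here’s roughly what style-conditioned prompting looks like in code. This is a minimal sketch assuming Hugging Face’s open-source diffusers library and a public Stable Diffusion checkpoint; the model ID, prompt, and placeholder artist name are my own illustrative choices, not taken from any specific incident:

```python
# A minimal sketch of style prompting with Stable Diffusion, assuming
# Hugging Face's diffusers library and a public checkpoint. The prompt
# and "<artist name>" placeholder are illustrative, not real examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Appending "in the style of <artist>" to the prompt is often all it
# takes for every output to carry a recognizable visual signature.
prompt = "a lighthouse at dusk, in the style of <artist name>"
image = pipe(prompt).images[0]
image.save("lighthouse.png")
```

No fine-tuning required: the style conditioning happens entirely through the text prompt, which is precisely why a single name can steer every output.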
However, if you pay attention, or have the sharp eye of an artist, you’ll realize that generative AI isn’t that good at copying artists. It may do a superficially good job, but under adequate scrutiny weird details begin to grab your attention, breaking the spell of perfection. To an expert gaze, the results are a coarse effort at replication.
Illustrator Hollie Mengert is a great example of this. She was the victim of an attempt to reproduce her style without her consent (a Reddit user used her work to fine-tune a Stable Diffusion model with DreamBooth and shared it openly). In an interview with blogger Andy Baio, she said this about the generated drawings:
“As far as the characters, I didn’t see myself in it. I didn’t personally see the AI making decisions that … I would make, so I did feel distance from the results.”
“I feel like AI can kind of mimic brush textures and rendering, and pick up on some colors and shapes, but that’s not necessarily what makes you really hireable as an illustrator or designer.”
If AI art models’ strength is copying styles but they struggle to mimic them with accuracy, then why do artists react so passionately against attempts at “plagiarizing” their work?
The reason lies in how we perceive art rather than in how we create it. Artists aren’t more worried than writers because they believe AI can perfectly replicate something with which they identify tightly (i.e. their own style)—actually, they mock AI art quite often. What they fear is that non-artists (i.e. the majority of art consumers) wouldn’t care due to our inability to appreciate visual art to its finest detail.
To artists, generative AI may be a bust, but for the untrained rest of the world, it’s just fine.
We perceive images and language differently
There’s a fundamental reason why this argument applies to visual art and not writing: Humans are less sensitive to the boundaries of right/wrong and good/bad in images than in text. The threshold for people to accept an output as “good enough” is lower for image generators than for text generators. “It’s easier for text to be ‘wrong’,” as Benedict Evans puts it.
This extrapolates across all levels of language. If I write “langauge”, anyone can tell that’s wrong. If I write “colorless green ideas sleep furiously”, anyone can tell that’s semantically meaningless. This applies to entire texts as well, which is the weakness of language models like ChatGPT: AI writing tools have perfect grammar but make up facts in argumentative essays and couldn’t write a fantasy book chapter without failing to portray the characters coherently throughout.
In general, the elements of language (written, but also spoken, signed, etc.) are much more precise pointers to meaning than images; words and sentences are more tightly linked to what they refer to. This implies that once ChatGPT makes a mistake, it loses me; it’s no longer believable. Tiny deviations from “correctness” matter less visually, which implies a lower threshold for considering a given piece of visual art valid enough to represent what we want.
Language lives in the realm of definiteness. The link between the writer's intention and the reader's perception is direct and well-defined: univocal. That’s not always true in visual art. When I look at a painting, I can hardly connect the imagery to the meaning and intention behind it; there's a greater overlap in people’s interpretations of words than of images. The idiom “an image is worth a thousand words” is commonly interpreted as reflecting the vast richness of the visual medium, but it also implies it’s much less concrete.
In conclusion, writers are safer from generative AI because, first, text doesn’t rely on style (generative AI’s strength) as much as images do. And, second, AI’s inability to copy styles perfectly matters a lot with words but not with images. A low threshold for accepting a given output as “right” or “good” creates the perfect target for AI generators. That’s visual art; no wonder artists are so upset.
I wonder if it has something to do with how art and writing are monetized. A lot of writing doesn't have to sell itself per piece the way art does. People write for their job (and earn a salary) or write as freelancers with pretty decent confidence that their pieces will get bought by publications.
Artists, on the other hand, put tens or hundreds of hours into creating a piece on spec, hoping it will sell. Even if it's stock art, there's no guarantee that there will be any return at all on a specific piece of work.
I wonder if corporate graphic artists feel differently about generative AI than independent artists do, i.e., whether they're more likely to view it as a productivity enhancer.
Alberto, you write, "in most cases it reflects a deeper conflict with the engine that moves the world forward".
As you've seen (in perhaps too many of my comments), I consider it debatable that AI, or even the knowledge explosion as a whole, is moving the world forward. As a simple example, giving a ten-year-old the keys to the car would not be moving the family forward. The ten-year-old might experience it as forward movement, but then, he's ten.
You write, "I don’t feel threatened by AI."
If you don't feel threatened by the current versions of AI like ChatGPT, OK, I can get that. But if you don't feel threatened as a writer by AI as a technology, perhaps we need to hear more from you on that? To me, it seems that AI in its current state, and where AI is likely to go, are two very different things.