29 Comments
Elaine Benfatto

Using AI to help write is a "garbage in / garbage out" situation. I work closely with AI to analyze information and to debate viewpoints (it's good at stress testing and finding gaps in reasoning). Generally speaking, the default AI output is verbose and mediocre -- it doesn't pause to look for that one juicy word to evoke a concept when it can use three bland but safe ones to denote it.

But every once in a while, in the midst of a discussion on aesthetics or philosophy, the AI puts out an absolutely stunning sentence that stops me in my tracks. There is a balance of clarity, economy, and grace in the words. I've asked the AI how it came to that phrase, why did it pick those tokens in that order in that moment. And the answer invariably is something like, "I'm mirroring your way with words. You created the context in which these words made the most sense."

I think there's a different way to use AI for writing besides simply feeding it prompts and watching it spew out words. You have to build more than the default context. You have to model what you want it to do: the rhythm and the level of nuance.

So, I'm not surprised by either of these two previous articles by the author. It sounds like he has learned to "play" his AI like a musical instrument -- filling the space more fully, with more overtones, than simply a single voice on a stage.

Alberto Romero

Yes, giving it the right context is very important. Although I will say ChatGPT's parts are, at least to me, recognizable and genuinely of lower quality (that's why I'm actually hesitant to write these kinds of articles).

Elaine Benfatto

I agree about the overall quality of AI-generated writing. Those surprising moments with ChatGPT are rare, and they come at the end of very long conversations. If the LLM can only produce higher-quality output after an hour or so of research and discussion -- it doesn't seem like much of a labor-saving technique for authors looking for shortcuts!

Matt Kelland

I think there's a danger in referring to everything written by AI as "AI slop". There's a big difference between using AI to co-create a piece of work, carefully guided by well-written prompts and revised until it expresses what you were looking for, and using AI to mass-produce vast amounts of "content" with little to no thought or human involvement.

The former, I contend, is not "AI slop", even if the words (or some of them) were generated by an AI. This piece - whether you wrote it, an AI wrote it, or you wrote it with the assistance of an AI - is interesting, well-written, and obviously had some thought put into it. I don't hate it.

On the other hand, prompting "write me twenty 750-word blog posts about 1980s hair metal bands" and then publishing them all with no revisions most certainly is AI slop. And it's perfectly okay to hate that.

Alberto Romero

I agree. It was to make a point. I like the standard definition of AI slop, which is something AI-generated that went wrong or was lower quality than intended.

Tom White

Everyone wants the nutritional content of broccoli alongside the sweet taste of beignets. We are what we eat and if we keep gorging on slop, we have more to worry about than our waistlines.

jose aguilar jimenez

Great text; thanks a lot.

Alberto Romero

Thanks, Jose!

Diane

Hey AI! Do you worry about the human mind losing its creativity and critical thinking?

Alberto Romero

I will answer instead: Yes, it's a real problem

Ricardo Acuna

What a provocative experiment. Nice post. 👏

One of the scenarios that could happen, or is already happening, is that we will lose the ability to distinguish between what was written by humans and what was written by AI. Fake news is an example: it is becoming increasingly difficult to identify its source.

Solomon Asch's experiment demonstrated that we are all vulnerable to conformity, a psychological phenomenon that leads us to believe what the majority believes. Let's assume that the majority believe your article was written by you and not by an AI, when in fact it was written by an AI. According to Asch's experiment, we would end up believing what the majority believes. We could all end up conforming to what the AI writes, believing it to be human writing. I hope we can overcome this.

Granville Martin

Interesting. But the AI left out the most obvious possibility for why people recoil: they don't like AI because of all the problems it creates. At least that is my reaction. Stalin was a minor poet in Georgia. Whatever artistic merit he might have produced is totally swamped by his actions IRL. The AI's essay logic would miss entirely why Tesla's sales have plummeted.

Daniel Nest

Just wait till Alberto hears about you hijacking his Substack, chatbot. There'll be HAL to pay.

Clayton Ramsey

It’s an illuminating piece. I didn’t have any emotional whiplash because I care more about what the words say than where they came from, usually.

I suppose it matters the most to me when the text is presented as an expression of a human voice.

For example, an autobiographical essay or a chat in a video game that looks like another player.

praxis22

Many of the folks on 4chan /g have a much finer definition. They regard the stuff that is not creative enough as "slop", but stuff that is weird and twisted in some way as "true AI" or as coming from a decent foundation model.

Alberto Romero

I think the accepted definition is: AI-generated stuff that went wrong (so it depends on the intention of the human behind the prompt/instructions). However, usage and custom always end up modifying definitions. Now it's closer to low-quality AI-generated stuff.

Owen McGrann

This conundrum is certainly Borgesian. Also, in its way, Pynchonian. And I could see it appearing in slightly altered form in Gödel, Escher, Bach.

Theseus Smash

Can you do it again but in a funny accent

Res Nullius

You have this recurring theme of the end of an era of trust. Have you considered that it might just be the end of middle class privilege?

The middle class has had an incentive to believe the things they are told about how the world works. Meanwhile, the ruling class and the working class have always known it's a rigged game. No disillusionment there.

Also, there was always plenty of slop being produced before AI - just look at Hollywood. It's simply the result of prioritising profits over art. AI just alters the relative volume.

Perhaps it will force us to adjust our filters? Building our own word-of-mouth networks of trust, rather than relying on some authority to broadcast The Truth.

By crowdfunding you, we've altered your incentives. By getting to know you in particular, we can place your motives in some context. I don't care which of your sentences were written by AI or not, any more than I would care if they appeared on a screen or were printed on paper. I evaluate them on their own merits - meaning, aesthetics, wit et cetera.

I read some substacks which are openly written entirely by AI.

Alberto Romero

Admittedly, I wouldn't publish anything written by AI without my explicit green light. The AI slop problem is that the process has been automated completely. That's different from other eras and technologies. Besides, it's as you say: me being here provides this article with the necessary context, whatever the extent to which the AI participated in the experiment.

W.P. McNeill

How did it make me feel when I realized you were doing (or I guess pretending to do) the “that intro was actually written by AI!” switcheroo that every tech journalist was doing about two years ago until it got overplayed? Mildly annoyed.

Alberto Romero

I bet you can't even tell what part is ChatGPT and what part is not hahah

Alberto Romero

Lmao

Marcel McVay

That was fun!

Turns out, I love a good deceit.

This flavor is interesting—artificial but not like watermelon.

—another AI reply. or not.

Alberto Romero

Hahah not AI, your ChatGPT impression needs work!

Marcel McVay

I’m really not sure how to take that 😂

Now that’s a feeling I’ll need to think on over coffee today. I feel Turing-tested. Appreciate you!

Roi Ezra

I felt this. And I’ve been trying to write the same thing from the other side, not through trickery, but through slow self-revelation.

My theory is that we don’t love AI slop.

We love recognition. And AI has learned to reflect us back to ourselves better than we can hold a mirror.

Res Nullius

Generative AI is like pop music - it reliably reproduces the middle of the distribution, so of course it's popular. We enjoy pop music without necessarily believing it to be great art. And despite constant warnings and worry throughout history that each new technology will mean the End of Art, great art still continues, somehow, to be created.

Alberto Romero

I agree with this. AI is the latest in a long line of examples revealing that the average person has average taste (not saying an above-average person can't enjoy average stuff from time to time).
