32 Comments
Dirk von der Horst:

I think the thing is that Geoffrey Hinton, Ray Kurzweil, Sam Altman, and Co. are not looking at the world with that same sense of having to accept it. They're just changing it to fit their fever dreams of mechanical immortal perfection without regard to consequences and without regard to the reality of limits. And we all have to adapt to their refusal to contemplate the limits of the world as it is.

I suppose it's one of those things where two opposite truths are both true - (a) it does not help us to try and escape the reality they've created for us and (b) it is gross on so many levels - so many levels! - that they're forcing us into a reality we didn't ask for.

Alberto Romero:

Yes, I agree with this. I don't like being forced to discern whether what I read is slop or not.

Amy A:

The AI overlords will ignore the importance of consent at every opportunity.

Peco:

“Do I care that the oranges I eat and the wine I drink are not from the orange trees and the grapevines in my grandmother’s orchard?”

I think this isn’t quite the right question. When comparing human writing to AI writing, we are not comparing one human orchard or vineyard (my grandmother’s) to another orchard or vineyard (one owned by a stranger or corporation). In the case of the orchard or vineyard, the source of the fruit is still real: trees and vines. In the case of AI, the source of the fruit is not a human being with subjective perceptions of the world, or a soul, but a computational system with no sentience, which only creates the semblance of reality.

The real question is closer to: “Do I care that the oranges I eat did not actually grow from a real tree, but were synthetically created in a lab?” Of course, some people may still not care. Yet that is the real question. And whether our body will be healthier eating synthetic oranges, over the longer run, is the real answer to whether AI will take us to a good place, or not.

That experiment needs to still play out, but my sense is that there is a tolerable threshold for how much AI we can “process” mentally or spiritually, before it gets very unhealthy. Can we tolerate a single, well-written essay by AI Fyodor? Yes, probably, in the same sense that we can tolerate an occasional Big Mac.

But not more than occasional.

Alberto Romero:

Yes, I agree. I used the food analogy to draw the parallel to Lewis's insight. Poetic license; I think I got the point across. However, I disagree that we can't tolerate artificial or synthetic food if it's done well. We already eat quite a bit of that (not the fast-food kind, which is not a good example). I'm not sure we can't, literally, tolerate AI slop, at least not any less than we tolerate human-made slop. The problem is not its synthetic origin but the quality itself. Can you make high-quality stuff with AI? TBD

Peco:

For my part, I will keep eating as much organic as possible, both food-wise and digital-wise.

Alberto Romero:

Yeah, me too!

Rachel Rigolino:

Yes, well, as someone who also liked the essay (my grandcat is named Fyo, but that is not why I enjoyed it), and who writes much like the AI author (!), I also wonder about these questions (and please note the short, dramatic sentence beginning your next paragraph. Is that AI-like prose?): I enjoyed the AI-written post and didn't care about the annoying juxtaposition enough to even realize it was there. I may have had a suspicion at the time (I can't remember), but maybe I didn't. Does it matter to me that it's AI-written now that I know? Does it change the value I ascribed to it in retrospect? Does it change the enjoyment I had while reading it? These questions belong to the same category: If you don't know it's AI, does it matter? (Does it matter if you know it's AI?)

I am not sure anymore.

I am not sure either.

Best wishes,

Rachel

Alberto Romero:

Hey Rachel, which sentence do you wonder might be AI-like prose?

Jacob:

It's not AI that's the problem; it's thoughtless AI slop. Even so, I've generally been biased against AI content.

Now I don't know how to feel. I liked the essay by Fyodor. It was probably the best I read this year.

Knowing that it was AI written, I want to hate it, but I just... can't. It slipped through my defences, and I generally pride myself on my ability to detect AI. Now it clearly doesn't matter at all. Slop is slop and quality is quality, independent of production.

Death of the author; if I never really cared about *who* wrote it before, why should I care now?

Alberto Romero:

Yeah, one AI-written essay being so popular is a cold shower for many. And a wake-up call as well.

Ido Hartogsohn:

Unsettling and heartbreaking. One of your best.

Alberto Romero:

Thank you Ido!

Mahdi Assan:

Two thoughts on this:

Firstly, I think over time it will be more and more difficult to spot AI-generated content because the AI models will, over that time, probably get better at producing it.

Secondly, I think there will still be a hierarchy of content online. AI does level the playing field in terms of access to the means of knowledge/content production on-demand and at-scale, but at least current models are not good enough to make up for the slop that gets generated if the user does not know what they're doing. If they *do* know what they're doing, that means the user has the taste, imagination, and judgment to steer the model in the right direction to make good content. Would this be acceptable to people? Right now, many people, perhaps like Ted Gioia, would think not. But as these models continue to proliferate and get used for more and more things, people who know how to use AI effectively might be criticised much less. But maybe I'm wrong on this.

Really great piece Alberto!

Giulia:

I immediately recognized Fyodor from the style of his viral notes, so I never bothered to read any of his essays.

AI writing will be fed to poorly educated people, but I guess they'll get tired of it because it's too predictable and agreeable. Nothing can beat an author with real experience and skin in the game.

Alberto Romero:

Never read any of his notes, haha. That essay is, if you look for it, quite obvious indeed. And I agree: you don't beat a real author with that intended-to-go-viral style.

EAARTHNET:

Giulia, forgive my interjection. I have a PhD and three degrees and am considered well educated, yet I have many 'poorly educated' friends who make many ego-trapped academics look irrelevant! Human exceptionalism, or supremacist perspectives, is the bane of our species! Multi-perspective unitive positioning is our salvation. AI, like many 'inventions', is subject to abuse, but I declare that, apart from the current energy load (which could be resolved), AI can be exciting if its corpus is rooted by us in the commons rather than corporate greed. So we can be Luddites, or embrace it with a unitive perspective! That's how I see it, but I am sure others at differing points of the spiral will have reservations. It's an exciting world if you put the planet first in its embrace.🤷

Giulia:

I'm not sure what your point is here. AI is exciting, I agree. But not generative AI.

If you read fiction, it's because, in a way, you believe a story can help you in an entertaining way. But I'm not reading a story generated by a machine that has no concept of depression, trauma, or sacrifice. And I hate that AI enthusiasts are just lazy people who want to be rewarded for being smart.

"Work smarter, not harder." This proto-LinkedIn sentence must die.

My favorite short story is Ted Chiang's "Story of Your Life." Do you know how long it took him to write it? Five years. FIVE YEARS.

I'm reading all of his stories now because I respect that. We need time to mature ideas, to experience things, to move on, to analyze, to hate, to love, to understand, to forgive, to research, to study, to be wrong.

AI-generated content will never do that. You prompt, maybe you edit because you think you're smart, and then hit Publish. Why should I read it? I value my time too much.

But hey, embrace the change and read whatever you want.

I will read stories from Ted Chiang, Dostoevsky, Philip K. Dick, and Ursula Le Guin.

EAARTHNET:

Ah! Cross purposes. AI will not replace human fiction. I have an extensive library that I will not waste time listing; suffice to say I cannot envisage wasting time reading a fictional novel by AI, personally.

I would not, however, judge those who do in any way that is dismissive of their personal enjoyment.

On the subject of AI, I am concerned with non-fiction.

On the subject of writing, many luminaries took far longer than five years!

Also, ascribing lazy or egregious motives to those who embrace something you personally dislike is a tad unfair, although I do understand the defensive stance taken by many who criticise others. Academic elitism is what I tend to find lazy! Rgds, Dr Neil Netherton

Dirk von der Horst:

I WILL ALWAYS BE A LUDDITE.

EAARTHNET:

I understand, a broad church is to be accepted.

Julrig:

What's the opposite of AI slop? Or is it all to be considered AI slop?

Will you refuse a cure for cancer if it is created by AI?

Alberto Romero:

Fwiw: I hope *no one* refuses a cure for cancer, whatever its provenance. If AI can accelerate a universal cure, I will be the first to welcome it. I wonder why nuanced essays are generally read as extreme in the direction opposite the reader's own inclination.

Alberto Romero:

How did you get to that question from this essay? lmao

Julrig (edited):

Yeah, apologies, that was a bit of a blunt reflex. An extreme case like curing cancer does not rely on cultural rootedness, authorship, or shared meaning in the way I believe you were saying art, language, and literature do.

I think where I land is slightly adjacent rather than opposed. For me, slop is slop regardless of whether it’s AI or human.

100% agree on the removal of the article from the best-essays list, but I was disheartened by your harsh judgment of yourself for not picking it up. Maybe 'rootedness' lies more in the reader's own experience than in the creator's?

Peter W.:

My suggestion is "original thinking". Basically, a modified take on the Turing Test. IMHO, if an AI could produce writing, images, or music that even seasoned experts in the respective field cannot identify as "AI slop", it wouldn't be AI slop anymore.

Ted:

Distilled to its essence, forcing the "AI" business model into the economic matrix is incremental utilitarianism.

It would not have proceeded to this point unless it sequestered sufficient exchange value to provide incentive for unearned income.

The only use value it adds is to increase lethality, to kill. The rest is displacement, founded in theft and inherently kleptocratic.

Yes, all must adjust to reality. Obviously, such adjustment includes awareness of who and what seeks one's disadvantage and premature demise.

Chris Schuck:

These are all important questions you raise, but, and forgive me if this is naive, I'm no expert, can we really be 100% sure the Fyodor essay was AI-written? Granted, it has the typical red flags and may well be AI, but isn't another important question the risk of false positives as we become more familiar with these typical signs, and trust less and less that suspicious cases could potentially be human? (Not to mention people beginning to unconsciously mimic AI style over time.) I don't doubt that the essay was probably AI, given your expertise, but it concerns me that not a single person here even raises the hypothetical of a false positive.

Alberto Romero:

Oh, fair question, and my bad for not clarifying: it was not my expertise that revealed it to me this time. Fyodor himself admitted it; I just never checked his bio (he says he uses AI to write and then takes over the pieces to polish them). Upon reading that essay a second time, though, the signs of AI presence are pretty obvious.

Chris Schuck:

Oh, thanks so much for responding and clarifying; this was really helpful! To be clear, I'm on board with the general concerns you voice here. I'm just concerned about the general erosion of trust, and the occasional false positives, that will inevitably come with all this.

Leonidas Tam, PhD:

"What AI, like Mary, doesn’t know is what seeing a color is like." Isn't this rather like John Searle's Chinese Room analysis?

EAARTHNET:

Alberto, hi, great discussion. As a declared supporter of a transformed, augmented AI, I feel this is like staring over a cliff where the handrail is simple and effective: if it's good, does it matter whether the construct comes from one mind (itself a distillation of numerous sources) or from a corpus of training? AI is a colleague or a slave, and lazy engagement produces Western bias due to its corporate loading; dig deeper and the true potential is exposed. I declare, for my sins, a site, https://aicommons.carrd.co/, with a manifesto. I might be totally on the wrong path, but it's worth a bash: 8 billion souls is our Commons. Please just read the manifesto. It's unitive thinking, as I think yours is, not siloed, as many detractors are. But great article, and thanks for the opportunity to add my little bit! Rgds, Neil... editor, eaarthnet 🙏
