I Liked the Essay. Then I Found Out It Was AI
C.S. Lewis on AI writing
I.
In a letter to his friend Arthur Greeves on June 22nd, 1930, C.S. Lewis wrote:
Tolkien once remarked to me that the feeling about home must have been quite different in the days when a family had fed on the produce of the same few miles of country for six generations, and that perhaps this was why they saw nymphs in the fountains and dryads in the wood — they were not mistaken for there was in a sense a real (not metaphorical) connection between them and the countryside. What had been earth and air & later corn, and later still bread, really was in them. We of course who live on a standardised international diet (you may have had Canadian flour, English meat, Scotch oatmeal, African oranges, & Australian wine today) are really artificial beings and have no connection (save in sentiment) with any place on earth. We are synthetic men, uprooted. The strength of the hills is not ours.
I was eating meat with vegetables and rice when I found this quote on my Twitter timeline, on one of those accounts that yearn for times past—medieval kings and chivalry guilds—and, as I looked down at my half-empty plate, I realized that Lewis, and through him Tolkien, was absolutely correct: I don’t know where the tomatoes and carrots and mushrooms come from, or whether the cow whose meat I was enjoying was born and raised on some farm in the Madrid countryside. Is the rice from China at all, or is that notion Western propaganda? The coffee I drank afterward was cultivated in Brazil and roasted in Portugal; at least that much I know. But I could not pinpoint the origin of the other ingredients to save my life. So I’m not merely uprooted, as Lewis says, but unaware of where I’ve been transplanted.
I’m tempted to offload my responsibility onto the usual suspects: capitalism and modernity. But today I will own it: I lack the strength of the hills, and I recognize that I would be more sensitive to my world if the same atoms that make up the ground I stand on also made up my body. (Alas, I don’t think I will ever see a nymph or a dryad: I’m not yet old enough to go around believing fairy tales anyway.)
But that’s not why that quote grabbed my attention. Lewis uses two words that feel out of place in a letter from 1930 to my 21st-century eye: artificial and synthetic.
A century later, we’re fully uprooted in terms of our food diet, but even more so in terms of our info diet. The Western canon has not changed much since Lewis’s time—a bit of Kafka, Orwell, Borges, Pynchon, DFW, Rulfo, Bolaño, and whatnot—but who reads literature nowadays? No, I’m talking about the information we consume: it is, increasingly, as artificial and synthetic as we are. Uprooted from any relation with the world whatsoever. The more we eat shit food and read fake slop, the more I share the sensibilities of those helmet-guy “the-Roman-Empire-was-the-cusp-of-civilization” Twitter accounts. And the more Lewis’s words quietly go from admonition to advice: make your peace with the world you live in.
If you prefer ancient words to convince yourself that this is the better approach, here’s Epictetus in the Enchiridion: “Don’t demand that things happen as you wish, but wish that they happen as they do happen, and you will go on well.” I’m not the most ardent fan of the Stoics, but he’s right.
II.
AI is the latest culprit, but far from the only one. However, if we intend to tackle this systemic uprooting, we may as well start from the rotting canopy.
Let’s take an example I came across today. I was reading Ted Gioia’s list of the best essays of the year, which he published yesterday. I recognized a few, like the one at The Yale Review where Bryan Burrough revealed he used to make $166,000 per article (!!!) in the good ol’ days of Vanity Fair. The one I remembered best was an essay about—how could it be otherwise—phone addiction. It was signed by an anonymous author by the name of Fyodor (you may know him). I had referenced it myself in How to Live Without Your Phone, so imagine my surprise when the first comment on Gioia’s list, by Ruth Gaskovski, was this:
“The post by Fyodor is AI-written.”
You can imagine why, as someone who 1) enjoyed the essay, 2) prides himself on being good at spotting AI, and 3) just last week published a post entitled 10 Signs of AI Writing That 99% of People Miss and then The Death of the English Language, I didn’t quite appreciate this turn of events. (Even if you know what to look for, I will say in my defense, no one wants to go around all day with their guard up.)
I clicked once again on Fyodor’s profile and, upon reading the first lines of that post—rather ironically entitled Meditations for Phone Addicts (my friend, you didn’t meditate shit!)—I recognized the usual telltale signs:
We reach for our phones not merely out of habit but out of existential dread. The silence. The stillness. The terrible weight of being alone with one’s thoughts.
The bland juxtaposition (habit vs. existential dread) and the triad whose last element is a bit longer (silence, stillness, etc.) are clear cues. I don’t have a good excuse for why I didn’t catch this, but perhaps a valid explanation: maybe I suspended my analytical gaze when I read the first paragraph, which I genuinely liked (and still do):
The modern soul finds itself divided. In one hand, we hold a device—a small, glowing rectangle that promises connection, knowledge, and distraction. In the other hand: nothing. And it is this nothing that terrifies us.
I’m not surprised the post has amassed north of 20,000 likes at the time of writing. People love it. Upon reading Ruth’s comment, Ted, who had considered this essay among the best of the entire year, took it off the list. (Just as I pride myself on being AI-proof, Gioia prides himself on being well-read, so it’s safe to assume he read a lot of essays this year, which makes it all the more striking that an AI-written one made the cut.) He wrote:
I’m removing it from the list. Even if it was trained on Dostoevsky (one of my five favorite novelists), I don’t want it here. Thanks for alerting me.
I have two reflections to make.
One: I enjoyed the AI-written post and didn’t mind the annoying juxtaposition enough to even notice it was there. I may have had a suspicion at the time (I can’t remember), but maybe I didn’t. Does it matter to me that it’s AI-written now that I know? Does it change, in retrospect, the value I ascribed to it? Does it change the enjoyment I had while reading it? These questions belong to the same category: If you don’t know it’s AI, does it matter? (Does it matter if you know it’s AI?)
I am not sure anymore. This is the world we live in; can I wish to live in this world? Do I care that the oranges I eat and the wine I drink are not from the orange trees and the grapevines in my grandmother’s orchard? No, not really. Do I care that I can’t see the nymphs in the magic fountains or dryads in the misty forests? I wish I did, but I don’t. As I wrote in My Kids Will Fancy Generative AI, I Choose to Fight It:
I want to see the good, however faint it may seem to me now, among the bad. But it’s hard. It’s hard because writing . . . is dear to me. Just like copying manuscripts by hand was to scribes. Like talking was to Socrates. Like quills were to 17th-century intellectuals and typewriters to 20th-century typists. We can’t understand it, but that was their whole life. And the future took it from them. I, a writer of the 21st century, don’t want to lose my life. Even if I can see, with a defiance product of an eternal recurrence that betrays its own intentions, that this poison-laced gift would be welcomed by our distant heirs, who, detached from our customs while immune to the very illness that’s taking us out, will be grateful for our sacrifice.
And two: Why does the fact that Fyodor’s essay was made in collaboration with AI automatically disqualify it from Ted Gioia’s list of the best essays of the year? Knowing his stance on AI, I assume Gioia didn’t take it out because he decided it was actually not that good, but because it was AI-written. I’m sure that if I asked him, he’d admit he still liked it, if less comfortably than before, knowing it was AI. The reason is simple: he doesn’t “want it here.” He is not happy with the world he lives in. I get it.
III.
I am afraid, however, that it is here. Just like Canadian flour, English meat, Scotch oatmeal, African oranges, & Australian wine were in Lewis’s time. AI is accelerating nothing as much as it is accelerating our uprooting: once we no longer know the provenance of what we read, we will belong nowhere insofar as our language and cultural tradition go. I can enjoy Fyodor’s essay nevertheless, but I won’t deny that in being stoic about it, I accept what I lose.
You are safe if you read old books, but not even new books are AI-proof: a user on the Slate Star Codex subreddit investigated the latest Hunger Games novel, Sunrise on the Reaping, and found convincing evidence that it was at least partially AI-written. No human would liken a spider web to silk in terms of tactile sensation: despite being the same material, one is sticky, the other soft. But semantically they’re close, and that’s what AI knows about: semantic correlations in text.
There have been plenty of studies on this phenomenon, and every single one reveals the same failure mode: AI is not rooted, as we once were, in the real world; its world is language. It’s trapped in the lazy corners of the Library of Babel. ChatGPT is like Mary, “the color scientist”; it knows everything about colors: the wavelength of the light rays, why the rainbow always presents itself in the same order, how Newton discovered, using a prism, that white is the amalgamation of all the others, or why mixing a color with its complement on the chromatic circle darkens it without any black involved. What AI, like Mary, doesn’t know is what seeing a color is like.
It’s not just popular novels; the story is the same across a substantial proportion of the books, essays, and articles being published today. If you don’t use AI—I don’t mean Grammarly but ChatGPT—you can’t compete on speed and cost with those who do, a dynamic I predicted in AI Writing Is a Race to the Bottom. When I say that you no longer know the provenance of what you read, I mean it in the strongest terms possible; when I admit I’m not sure it really matters, that I’m trying to make my peace with the world I live in, I accept that both things can be true at once. (Don’t worry, I won’t drive you crazy by revealing that this article, too, was written using AI.)
I understand why L.M. Sacasas criticizes the idea of technological inevitability by calling it manufactured instead. I agree: rarely is a prediction inevitable by nature; most are self-fulfilling prophecies. However, the fact that our world was not inevitable doesn’t mean it’s not real. Can we go back? Can we put the “genie back in the bottle” (to use an expression I hate)? Maybe. But from my point of view, as a tiny citizen of the universe, the most I can do is warn you that this is happening and try to develop my equanimity toward the world as it is. Only then can we tackle the perhaps more important question: what the world ought to be. For now, however, I leave that to dreamers, madmen, and those incapable of acceptance; you can’t build a new world without first coming to terms with the reality before you.
Lewis’s diagnosis of the consequences has transcended time: “the strength of the hills is not ours.” I agree. In the spiritual sense, this means that not belonging robs us of the ability to perceive the world in all its magic. A more literal reading reveals that we are not part of our immediate biosphere; we are strangers in our own land: the atoms in my body are not from here. If we extrapolate this interpretation to the information ecosystem, the conclusion is that we don’t belong to our immediate noösphere. We are being inadvertently extirpated, like a tumor, from the collective layer of thought, ideas, and reason.
We belong in Jean Baudrillard’s simulacrum; we’re subjected to the map that generates what we take to be the territory. But it’s not the real territory, just a good-enough replica; fully “artificial” and “synthetic.” Knowing that Fyodor is AI makes me uneasy, not because I enjoyed an AI-written piece, but because I didn’t know it at the time. I’m still unsure how to let go of my need for control.
I know our epistemic customs have been under attack for decades. It’s just… feeling it personally made me realize, for the first time, that my feet—my roots—no longer touch the ground; a new kind of vertigo I’ve yet to get used to.

I think the thing is that Geoffrey Hinton, Ray Kurzweil, Sam Altman, and Co. are not looking at the world with that same sense of having to accept it. They're just changing it to fit their fever dreams of mechanical immortal perfection without regard to consequences and without regard to the reality of limits. And we all have to adapt to their refusal to contemplate the limits of the world as it is.
I suppose it's one of those things where two opposite truths are both true: (a) it does not help us to try to escape the reality they've created for us, and (b) it is gross on so many levels (so many levels!) that they're forcing us into a reality we didn't ask for.
“Do I care that the oranges I eat and the wine I drink are not from the orange trees and the grapevines in my grandmother’s orchard?”
I think this isn’t quite the right question. When comparing human writing to AI writing, we are not comparing one human orchard or vineyard (my grandmother’s) to another orchard or vineyard (one owned by a stranger or corporation). In the case of the orchard or vineyard, the source of the fruit is still real: trees and vines. In the case of AI, the source of the fruit is not a human being with subjective perceptions of the world, or a soul, but a computational system with no sentience, which only creates the semblance of reality.
The real question is closer to: “Do I care that the oranges I eat did not actually grow on a real tree, but were synthesized in a lab?” Of course, some people may still not care. Yet that is the real question. And whether our bodies will be healthier eating synthetic oranges, over the longer run, will tell us whether AI is taking us to a good place or not.
That experiment still needs to play out, but my sense is that there is a threshold for how much AI we can “process” mentally or spiritually before it gets very unhealthy. Can we tolerate a single, well-written essay by AI Fyodor? Yes, probably, in the same sense that we can tolerate an occasional Big Mac.
But not more than occasional.