It’s AI, so I Didn’t Read
Ladies and gentlemen, we have a new hot term: AI;DR
Hey there, I’m Alberto! 👋 Each week, I publish long-form AI analysis covering culture, philosophy, and business for The Algorithmic Bridge. Paid subscribers also get Monday news commentary and Friday how-to guides. I publish occasional extra articles. If you’d like to become a paid subscriber, here’s a button for that:
Here’s your free essay of the week.
There’s a new hot term making the rounds that perfectly captures the spirit of the age: AI;DR, which stands for “AI; didn’t read,” a mutation of the venerable internet shorthand TL;DR (“too long; didn’t read”). The semicolon, which in the original separated cause from effect—the more you write, the less I read—now separates the machine’s output from your refusal to dignify it with your attention; quite an appropriate change given that we don’t have any left.
I find the acronym poetic in the way only minimalist poetry can be, because it manages to compress into five characters not one but two civilizational shifts: one is that we have gone from a world where the obstacle to reading was the length of the text to one where the obstacle is the suspicion of a lack of human involvement, which is to say we’ve gone from “I won’t finish that” to “no one started that.” The former assumes one is responsible for one’s limitations, whereas the latter urges one to externalize one’s responsibility.
I think this is a bad thing because you can’t always tell. It is as easy to be fooled by AI-generated text as it is to doubt a human’s. The effective impact of AI;DR, however well-intentioned, is in line with the times: we’re not short on excuses to read less. (If you’re still up for it, just take a look at this chart here.)
The other shift is related to that: None of this ultimately matters because the coming generations don’t even know how to read. We are entering the post-literacy period of history. In this sense, the AI;DR movement arrives just on time to pair up with its exact inverse: WF;AI, also known as “write for the AIs.” The SF tech class is already approaching their work with an audience made of AI agents in mind.
Isn’t it the best of luck that right at the time humans are reverse-alphabetizing themselves, a new species comes along with an unquenchable hunger for multilingual tokens? So write away!
Jokes aside, the instinct of AI;DR is profoundly reasonable: if some human text resembles AI style enough to be mistaken for it, then it probably doesn’t deserve your time anyway, right? By virtue of existing, AI has set a threshold we should have set for ourselves a long time ago; “garbage in, garbage out” doesn’t apply only to bots! Or, to use the modern coinage: slop is slop whether it’s made of silicon or carbon.
I say this as someone who writes for a living about AI and who has therefore been on both sides of the accusation. I have made risky claims, moved solely by my distinguished skill at catching AI prose. But I have also been vilified by people who, still adhering to their old TL;DR habits, decided not to read ’til the end. Their loss because—full disclosure here—I have adopted the policy of revealing AI assistance at the end, especially in experimental writing, to prove my point that “nothing is what it seems” and “you can never be sure.” Such is the era of AI.
To let new readers in on the joke, here are some examples that you can try: “AI Is Missing the Point,” “The Ghost of the Author,” and “The Truth Behind Moltbook, Revealed.”
(I suspect, by the way, that those readers who slander me arrived at their conclusion using some sort of AI detector, which I consider the most hypocritical piece of technology of the last century, and its users the most incoherent: how much water are you willing to waste only to denounce water wasting? Anyway.)
But I get the sentiment. Why would anyone spend twenty minutes of their increasingly besieged attention on a text that cost its ghostly author forty-five seconds and an ill-crafted prompt? (More people should read my guide on how to humanize AI writing; first lesson: be literate.) There is an implicit social contract in reading, as in everything we do—I give you my thought, you give me your time—so when one party automates their end of the deal, the other party feels rightfully swindled. And because offloading reading makes no sense (for most people), they simply don’t.
AI;DR is, thus, a targeted protest against the absence of effort and intent.
What intent remains in an Aeon essay filled with weird juxtapositions—a sign of AI writing—whose thesis is precisely that people are victims of digital platforms’ deliberate design to degrade “our capacity for sustained thought” through screen time-maximizing algorithms? What intent remains, I say, but the sheer meta-irony of letting a tool that’s partly at fault for the very cataclysm it decries write in the author’s stead? Is letting AI argue on our behalf how we keep our capacity for sustained thought? One could imagine that’s not the takeaway the author intended.
What intent remains in a New York Times “Modern Love” column about motherly grief where the writer leaves gems like, “Not hate. Not anger. Just the flat finality of a heart too tired to keep trying,” right after her son told her, “I don’t love you. I never want to see you again.” We’re not here to judge people’s private lives, but you must be a very busy person if you were as unavailable to live the tragedy of your life then as you are to write about it now. The column was titled “I Was Deemed Unfit to Be a Mother,” but I no longer wonder why.
People are pretty clear about the source of their hate: the moment they know some article or essay has undisclosed AI in it—the moment they smell the scent of wasted water—they are unforgiving. I can’t blame them, really: the writer-reader contract doesn’t require words to emerge from struggle—Dostoevsky wrote some of his best stories in one afternoon, and Nietzsche didn’t like to edit his essays, and yet I’ve never seen anyone dismiss their work on these grounds—but it does require paying up on intent.
But let’s go deeper. Something truly profound hides behind the AI;DR movement and behind the performative refusal to engage if there’s “no intent” backing up the text: “What should I do in the face of the unknown?” That is the fundamental question people are facing.
People’s typical answer to this is twofold: fight and flight. That is, turn AI;DR into a threatening weapon or, failing that, into a protective shield; use AI;DR to strike back or to not be struck. However, as ethically sound and socially progressive protests usually go, AI;DR is almost entirely correct in its ideation—the unknown should be dealt with—and yet severely lacking in its execution.
It fails as a shield—as a sort of epistemic filter—because the people most likely to invoke it are also the least equipped to apply it. The ability to detect AI-generated text is roughly associated with the ability to generate it in the first place, meaning that the best at protecting themselves from the AI siege are those participating in it. (I’ve wondered for years why people who refuse to even touch AI tools consider themselves subject-matter experts.)
And it also fails as a weapon in the form of a boycott. The logic of AI;DR assumes that refusing to read AI-generated text will, through some metastatic mechanism of collective pressure, discourage its production the way refusing to buy fast fashion or fast food is supposed to discourage sweatshops and McDonald’s chains. But fast is, in this Year of Our Lord 2026, synonymous with good.
The seas will dry up, the mountains will flatten, and the skies will burn before the grifter on duty stops generating text with AI because “faster does not always mean better.” Do you even know the human species?
Vauhini Vara, reporting for The Atlantic, offers a recent example of AI;DR’s failure: the biggest media outlets use far more AI in their writing than they disclose, and yet readers keep reading:
Wondering how common AI use really was, Russell and six other researchers set Pangram [an AI detector] on thousands of articles, and found that it flagged likely AI use across the U.S. press—including in the opinion sections of The New York Times, The Wall Street Journal, and The Washington Post—suggesting that writers are turning to AI more than their readers might believe.
The only useful heuristic at your disposal is this: if you read anything that came out after 2023-2024, the chances are high it contains a healthy dose of AI slop. I am really sorry that is the best I can offer you. It’s no consolation. However, you can use it to go back to the classics. Petrarch loved Cicero intensely because he could not read much else. Maybe we should be like Petrarch: refuse the slop and enact a new Renaissance.
Deliria aside, this is a reality we must grapple with.
We are at a crossroads in terms of the relationship between a text and the reality it encodes; the greatest fiction is just like reality. I say more: if you aim at capturing the truth, you may have no choice but to tap into the realms of the imaginary. There’s no reality that can be understood—or withstood—without fiction. Such is the weirdness that awaits us after the transition. But we are also at a crossroads in terms of the relationship between a text and its origin. The Author, as that entity that signs by name, has been dead for a while—at least since Barthes declared its death—but it’s a more serious matter to realize that it was never alive.
How clever is that, right? And yet, wrong. To the extent that AI is being used to write, one cannot so much argue that no one is doing it as argue that everyone is. You surely remember, from the many times critics have dealt with the “plagiarism vs. inspiration” question, that ChatGPT can be characterized as the amalgamation of everything that has ever been written online. In that sense, every AI word doesn’t have zero authors but infinitely many.
So we’re back to the pre-modern, pre-printing press society when scribes were not authors but copyists, or even back to the oral tradition, when anonymous bards sang songs from city to city—inadvertently building the entire cultural infrastructure and scaffolding of modernity—in an exercise of collective selfless authorship. What is AI writing, beloved reader, but a collective act of self-less authorship?
I will now let you ponder that question.
To those of you who have read this far without adhering to those TL;DR trends that are so in vogue in our illiterate society, I offer two gifts.
First, I gift you a genuine thank you for your time. This should be a compulsory clause in writer-reader contracts. We have grown so accustomed to wasting time that we’ve lost the manners of being grateful when someone lends us theirs. Time is, just like youth, invaluable. So I thank you.
Second, I gift you my honesty.
The real reason AI;DR won’t work is not that it’s a broken shield or a blunt weapon but that you, although generous with your time, are not as good at keeping your commitments. I hate doing this to you—I hate it with all my heart, beloved reader—but we all need to learn this lesson one way or another, so we can avoid making the same mistake in the future. The future will be more chaotic, and you don’t need extra chaos inside your mind, so an epistemic shower is in order. You say AI;DR, and I hear you, but there is no AI;DR: it won’t work, for you remain unable to tell apart a soul leaking itself through the spaces between words from the cooling water that leaks off the GPU racks in a datacenter. To a degree you’d rather not know, I am AI. And you did read this.




