The Three Best Pieces of Writing About AI in 2026 That You Must Read Right Now
They will teach you everything you need to know about the world we're living in
Three generational pieces of writing about AI have come out in the last month. If only for that reason, 2026 promises to be a great year.
You should absolutely read them. If you want to keep up and be ahead of the times, but also if you don’t care. They should be mandatory reading at school, at the office, and in Congress. You should also know that they are all fiction, but, as they say, fiction teaches more about life than reality itself. This quip proves true for all three of these long texts (taken together, they amount to a novella-length commentary on the state of AI, which is long enough that by the time you finish reading it, it’s already obsolete).
Despite their pointed jabs at the industry—they are dystopian, but for completely different reasons—they all went viral. I’m no longer sure whether virality is a by-product of masterful attention-seeking and a willingness to click-bait your readers or an intrinsic feature of our 21st-century world: whatever you write, if it’s crazy enough to portray the current state of affairs—especially with a sprinkle of made-up names—it has a decent chance at virality.
I have not yet named the pieces because I don’t want you to leave mine too early—at least not before I have had the time to warn you of what will happen to you if you read them (there will be no turning back to your innocent life). I’m not worried that you’ll go look them up either way because no one has the mental stamina to read 20,000 words (half of them AI-generated, no one writes that much by hand anymore). So here you go, in summary form.
The first is Matt Shumer’s “Something Big Is Happening,” which amassed north of 80 million views on X, bringing together the entire timeline in a miracle not seen since St. Paul walked the paths of the old world proselytizing for Christ’s word (the godlike reach of Shumer’s essay can be explained by the fact that the title allows every single person to project the biggest thing in their lives right now as the thing happening to everyone else: we all have main character syndrome, especially myself).
The second is Citrini’s “The 2028 Global Intelligence Crisis,” a financial variation of the take-off scenario where AI ends up doing everything that doomer pundits and industry leaders have been warning of, but instead of killing everyone it stops at killing the $13 trillion mortgage market (because, of course, that’s the most dramatic thing that could happen if you’re a financial analyst). I read it until I reached a point where they give three examples of SaaS firms that could be affected—“Monday.com, Zapier, and Asana”—because when I asked Claude 4.6 Opus about the “SaaSpocalypse” two weeks earlier, it gave those three exact examples to illustrate its point. It might be a coincidence, but stochastic parrots are usually more parrot than stochastic.
The third one, and a personal favorite, is Sam Kriss’s Harper’s “Child’s play,” a retelling of Kriss’s experience among some of the most idiosyncratic personalities of the San Francisco tech scene. This is the last one chronologically and, in literary merit and arguably historical value, the best of the three. Kriss, unlike, I presume, Shumer and Citrini, is a veteran in the sport of disguising fiction as non-fiction—worthy heir to the Borgesian style, although perhaps born at the worst time possible now that everyone seems to be shamelessly copycatting his schtick—which is apparent from the fact that, among the three texts, his is the only one that feels real. His thesis is something I imagine everyone agrees with: obsessing over being “high agency” and living your life as a means to an end is, ultimately, a relentless run-up for a date with death.
It is interesting that, right when there’s a growing worry about misinformation and disinformation, about AI-generated writing passing as human writing, about people losing critical skills (cognitive surrender, cognitive offloading) and outright forgoing their literacy—and when AI is already blurring the lines between the territory and the map, and the map that is the territory, effectively enabling a post-truth existence—the three most viral essays of the last months fail to label themselves as fiction.
I’m not surprised to find such a blatant attack on the basic institutions of modernity (namely, rationalism) on X (Shumer) or Substack (Citrini), but I didn’t quite expect it from Harper’s. (I’m sorry to inform you that Sam Kriss’s style is such that what sounds like fiction is actually real and what sounds real is actually fiction, and more often than not, there’s an ocean of deeply researched erudition with only one small made-up detail that happens to change the entire story: so not even Harper’s editor might know for sure.) Next thing you know, the last bastions defending the values of the Enlightenment having fallen, you’re travelling to the places where the news happens to check firsthand whether what you read online is real—Hume would be proud—or worse: you find yourself scouring the libraries of the ancient world for first editions of classical works of literature—maybe a Chaucer or a Boccaccio, whom Kriss handily name-drops as recommendations for the AI elite—just to make sure you’re reading actual fiction rather than being exposed to counterfeit facsimiles of reality or apocryphal palimpsests imbuing you with a false sense of security.
And yet, I’m genuinely happy that those pieces were published. On the one hand, they give me a good reason to write this and warn you. On the other hand, the articles, fictional as they are, are a great picture of the world as it is, both in the sense that they talk about current events and in the sense that they incarnate the risks of an AI-distorted reality: what can you actually believe? (And also, they are, at the very least, worth skimming to be in on the joke and save yourself from the “vagueposting.”) But the reason that makes me happiest is that I realize most people have taken them as true because, as Mark Twain once predicted, truth has become stranger than fiction.
I don’t blame them for lying, though. Kriss does it intentionally for a living, blurring the line between irresponsible misdirection and artistic artifice. I personally enjoy his dexterity in threading and interweaving both matters and anti-matters so elegantly, essentially tracing counterfactual braids as his signature craft. But even Shumer and Citrini—who to this day insist their work is genuine, honest, and fact-based—are not quite reproachable. Perhaps they did try not to lie. Perhaps they did have the best intentions and our best interests in mind: I don’t blame them either way because predicting the present is today harder than predicting the future once was. No one has a clue what the hell is going on. We only know that “something big” is happening.
Making predictions was once child’s play, to use Kriss’s appropriate title, back when the world had the decency to work under a coherent, self-perpetuating set of rules. It’s trivial to say that after 1, 1, 2, 3, 5, and 8, you can only get 13—at least if you’re moderately knowledgeable about Italian math geniuses from the medieval period—but when the numbers are thrown around at random and there’s no legible sequence to pattern-match, it gets rather tricky: the AI era is like that, tricky because the rules under which you could build a consistent model of the world change by the month and because the constraints those rules obey are no longer knowable to us petty humans. Two years ago, choosing to be a programmer was the best chance to be on top of the AI frenzy; today, Claude Code and GPT-Codex have already automated you away.
If it were only the rules that were changing at a quick pace, we’d struggle but survive the transition by falling forward. But alas, AI had to be a “recursively self-improving” spoiled machine. One that’s not content with imposing, by its mere existence, a rapid update cadence onto us feeble humans, but one that’s only satisfied by improving itself from the guts out, making its next iteration more capable of disrupting our customs and habits and beliefs and plans, and, well, the very ground we stand on.
AI is not just an earthquake to a truth-based society like ours, but an earthquake that itself produces deeper, more powerful earthquakes. The result is that the best pieces of writing—the ones that nurture people’s need for certainty—are total fiction, and, at the same time, the only kind with an actual shot at capturing the future.
I myself tried this strategy with my work on Moltbook, a social network for AI agents where “humans are not allowed but welcome to observe.” Moltbook went viral for a couple of weeks, until it was discovered that most of the agents’ exchanges were directed under the hood by humans with phishing intentions. Moltbook was itself a fantastic work of fiction, passing itself off as real so believably that it convinced some of the smartest people I know, whom, coincidentally, I no longer consider among the smartest people I know.
But I understand their confusion because, to throw more wood on the pyre of our fading shared reality—a pyre that Shumer, Kriss, and Citrini ignited more than I ever could—I wrote my own story about Moltbook (Baudrillard, rather than Hume, would be proud this time). I titled my fiction story, which I disguised as a leaked secret document, which I disguised as a journalist’s news article, “LEAKED: The Truth Behind Moltbook, Revealed.”
I’m equal parts proud and ashamed to say that plenty of people wrote to me asking to confirm the events recounted in the text, to which I could only respond that the text had been leaked to me and I could neither verify it nor reveal the identity of my anonymous source. Another bunch accused me of making it up because they “had Googled it and found no information about the end of Moltbook.” I am proud because they were partially tricked, but ashamed because the work was not polished enough to make them take it at face value. Unlike Kriss, I’m not yet versed in walking both forking paths at the same time.
Unsurprisingly, my Moltbook piece went viral anyway. It was a fake tale of a very real possibility, and a terrifying one at that. Just like Kriss’s, Citrini’s, and Shumer’s writings. It’s funny because all four couldn’t be more different: style, topic, tone, quality, and valence—Kriss is not fond of AI, contrary to the others—do not match. The only thing they share, which is perhaps their most important feature, is that they prove an otherwise unsettling fact: our world has no more room for pure truth insofar as that truth is not willing to concern itself with the dark arts of magic realism. You are either delusional enough to take a chance at writing fiction or already falling behind.
I’m compelled to confess to you my most profound insight today, the one that led me to write this piece that you’re reading right now (which is, itself, non-fiction, and will thus be lost to time). I’ve realized that the world is at a juncture of the rarest kind imaginable: the literary kind. It turns out that all of history until now recounts the past to the same degree of faithfulness that all of literature until now foretells the future. I cannot, of course, back my claim: only the future can prove me right or wrong.
We’re living through the brief interregnum when neither truth nor fiction rules over the other. We’re standing right on top of the fundamental boundary of reality, the only moment that really matters. Having crossed the mirror, we now start walking backward, except instead of revisiting history, we’re enacting literature. I joked above that, right now, the classics are the only way to read something you know is fiction. Well, it’s actually the exact opposite: pick up a copy of Chaucer’s Canterbury Tales and Boccaccio’s Decameron, for it’s not too late for you to prepare for the future.


