The other day I read an essay by George Orwell about Rudyard Kipling—Anglo-Indian, conservative, and poet (“if we can call him that,” as T. S. Eliot would say). Orwell concludes, after a sharp breakdown of Kipling’s work, that despite occupying a somewhat precarious intellectual position as a defender of the ruling power, Kipling had one advantage over his liberal contemporaries: “a certain grip on reality.” And he adds:
The ruling power is always faced with the question, ‘in such and such circumstances, what would you do?’, whereas the opposition is not obliged to take responsibility or make any real decisions. Where it is a permanent and pensioned opposition, as in England, the quality of its thought deteriorates accordingly.
So today we’re going to talk about the most important opposition group in the modern world: the AI skeptics.
I say they’re the most important group not as a backhanded compliment, but because AI is the most important technology today and, as such, any group tied to it is the most important of its kind. That includes its skeptics just as much as its evangelists, its creators as well as its most devoted users (myself included). And given that AI has been advancing steadily for 80 years—first gradually, then suddenly—the skeptics amount to a “permanent and pensioned” opposition to the enablers and their convinced followers.
Only twice on record (during the two AI winters) did the skeptics manage to reach a position of power from which to take responsibility and therefore “make real decisions.” In both cases, something similar happened: a few years after this role reversal, the AI optimists—as it’s also fitting to call them, and the term I’ll use from now on—effortlessly resumed their natural place as the “ruling power” in AI, after a brief symbolic vindication for the skeptics, who had rightly called out the excesses, the hype, and the unkept promises.
It’s important to note that both in those two historical moments and in the present day, skeptics wield in one hand a clear-headed common sense and in the other a caution more befitting a conservative than a progressive.
They’re mostly right in calling out certain reckless attitudes—ones that might fairly be called cult-like. To get a sample of such attitudes, we’d only need to visit the headquarters of the tech giants in Silicon Valley and ask researchers or executives (it makes no difference which) what they think about AGI and when they believe it will arrive. Their near-term vision of the future (near-term not because they lack vision, but because what they see, they believe is near) is a caricature of the science fiction that preceded them (a genre that was never encouragement, as they seem to believe, but a cautionary tale).
On this point, all skeptics agree: the quasi-divinity some attribute to a supposed superintelligence just around the corner is a sign of mental decay, of identity deficits, and of a disconnection from a reality they can’t grasp, trapped as they are in a bubble they neither want to leave nor could even if they tried.
But that doesn’t mean AI skeptics are in an easily defensible position. We should recognize that the instinct of the skeptic and that of the optimist don’t stem from the same philosophical stance toward the world. While the skeptic is concerned with the world as it is—and we often mistake this stance for the only one that matters—the optimist is concerned with imagining what the world could become, and what it should become. That’s why the optimist sometimes falls—at times unwittingly, at times deliberately—into the kind of hyperbole so characteristic of the AI community.
It’s when we try to weigh these two opposing forces, whose aims cross paths without ever touching, that we start to mix things up: It’s fair and necessary to critique the hyperbole, because it distorts the public’s perception of what AI is—but it’s a serious mistake, or at least that’s how I see it, to attack those who choose to express excessive optimism about what AI might one day become. It’s a serious mistake because the future, and to a large extent the present too, doesn’t lie along the path skeptics try to block—it lies in the direction optimists dare to dream. Stop the optimist from dreaming, and the skeptic will leave you with no future to go to.
It’s here, perhaps, that I feel compelled to lay out a more accurate taxonomy of the skeptics, whom I may be unfairly grouping under a single label. Because once we’ve redefined the roles of skeptics vs. optimists as philosophically compatible, two broad camps emerge within the former, one of them viscerally resisting any glimmer of overlap: on one side, those who believe that any vision of the future solely grounded in technological progress is, as an ideological tenet, an aberration; and on the other, those who question the execution by the optimists but not their forward-looking mindset. For the latter, it’s not the act of dreaming that leads to nightmares—it’s the way some go about trying to make those dreams real.
And it’s absolutely crucial to figure out who belongs to which camp. The former can be easily ignored—their irrelevance often matches their ignorance of the topic they’ve chosen to object to—while the latter are, if anything, the most valuable force we have to counter the fever for power that afflicts the AI enablers, a fever worsened by so many years of decisions and responsibilities and delusions of grandeur—delusions of near-divinity. That’s why, now that we’ve arrived here—and made it clear that not all AI skeptics are equally conservative—I’m going to name names: voices that I consider especially valuable as a persevering, influential, and, above all, responsible opposition.
Melanie Mitchell, professor at the Santa Fe Institute, writes often in Science and on her Substack blog. Her work focuses on questioning the premises most of us have forgotten—or chosen to ignore—in pursuit of interests not always aligned with the collective good. One such premise is that “reasoning AI systems reason.” Drawing on her ties to academia and her deep knowledge of cutting-edge research on the topic, Mitchell carefully breaks down what we mean by “reasoning” and concludes—with more restraint than the optimists, but also more hope than her fellow skeptics in the other camp—that while reasoning systems don’t reason like humans, and their heuristics are poorer and less frequently applied (dependence on memorization is rampant), they are gradually improving in areas where high-quality data exists.
She doesn’t reject the possibility that AI reasoning systems might keep improving—but, she says, we need more research and more caution when venturing far-fetched conclusions from studies with limited scope. What is science, if not a debate where truth is the only winner?
Mitchell has also explored two of my favorite subtopics in AI: what exactly we mean by general intelligence (artificial or otherwise), and why we believe it’s possible to create an intelligent system without accounting for the fact that every intelligent being we know (human or not) is embodied. Her reflections always strike a similar tone—a mix of academic caution and scientific open-mindedness. Her work is of the highest caliber, and she feels no need either to dismiss realities already tangible for the world (like the fact that large language models, LLMs, are more than autocomplete systems) or to endorse the wildest takes (like the claim that GPT-4 showed sparks of AGI, or that o3 is full AGI).
François Chollet, former Google scientist and creator of Keras, is perhaps the skeptic with the most mature theory as an alternative to the dominant narrative, which claims that LLMs are the main component, if not the only one, of a hypothetical AGI, and that by scaling data, compute, and models, we’ll reach levels of intelligence beyond our own in every domain. Chollet argues—very much in line with Mitchell—that what’s missing isn’t just quantitative, but qualitative. It’s not just a matter of building bigger data centers or generating tons of synthetic data; there’s a foundational problem of an algorithmic nature.
Current LLMs are severely lacking in their ability to understand, discriminate, or generate data outside the statistical distribution of their training data. In other words, they can only process “shapes” they have already seen (shapes in the sense of patterns in latent space). That missing competence is crucial, because becoming ever more skilled at a given problem does not imply being skilled at the meta-task of acquiring new skills to solve new problems—and that meta-skill, for Chollet, is what intelligence is.
Chollet hasn’t just diagnosed, with precision, how current AI fails—he’s also convinced he has the solution. That’s what he’s working on in his new startup, Ndea, which he founded last year after leaving Google. The solution, Chollet ventures, is to combine the program synthesis capabilities of search algorithms with the natural intuition of LLMs. It’s this intuition-enhanced program synthesis that Chollet believes can bring together the best of both types of AI. He thinks it would allow systems like ChatGPT to look for answers to the problems they’re given beyond the shapes they already know. Maybe this is the formula that could lead to an AI capable of making genuine scientific discoveries—and, in doing so, contributing to our civilization in ways that, for now, only humans can.
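To make the idea concrete, here is a minimal, deliberately toy sketch of that division of labor. It is not Ndea’s actual system (whose details aren’t public), only an illustration of the general pattern: a proposer suggests candidate programs, and a verifier keeps the ones consistent with the given input-output examples. In this sketch a brute-force enumerator over a handful of arithmetic primitives stands in for the LLM’s intuition, and every name in it (PRIMITIVES, propose_candidates, synthesize) is invented for the illustration.

```python
from itertools import product

# A tiny domain-specific language: each primitive maps an integer to an integer.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def propose_candidates(max_depth=3):
    """Stand-in for the LLM proposer: enumerate every pipeline of primitives up to max_depth."""
    for depth in range(1, max_depth + 1):
        yield from product(PRIMITIVES, repeat=depth)

def run_program(names, x):
    """Apply the pipeline of primitives left to right."""
    for name in names:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_depth=3):
    """Return the first candidate program consistent with every (input, output) example."""
    for names in propose_candidates(max_depth):
        if all(run_program(names, x_in) == x_out for x_in, x_out in examples):
            return names
    return None

if __name__ == "__main__":
    # Find a program mapping 2 -> 9 and 3 -> 16 (the answer is square after inc).
    print(synthesize([(2, 9), (3, 16)]))  # ('inc', 'square')
```

In a real system, the proposer would be a language model ranking or generating plausible programs rather than enumerating them blindly; that is where the intuition comes in, and where the search stops being hopelessly combinatorial. But the division of labor between proposing and verifying is the same.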
Arvind Narayanan completes this trio of standout skeptics. His contributions to the field rival those of researchers at OpenAI or Google, as well as those of the academic philosophers who take on the hardest questions. Narayanan is a computer science professor at Princeton University, but he’s probably best known as the co-author, together with Sayash Kapoor, of AI Snake Oil and the Substack blog of the same name. Here, though, I want to focus on what I believe will be his next book, which he’s already previewed for those of us who follow his work: AI as Normal Technology.
The central idea is that while AI may well be a powerful technology, both as innovation and as application, there’s no evidence to support the leap some make in treating it as the seed of superintelligence or the singularity. So the best way to describe it is through contrast: AI is normal. There are frictions—beyond what can be solved by training algorithms in compute clusters—that block any spontaneous ontological leap from “normal” technology to something historically and categorically unprecedented.
This idea of framing AI as a technology of tremendous potential and usefulness—without falling into the conceptual trap of superintelligence—points directly both to the split I drew earlier between one camp of skeptics and the other, and to the separation of both camps from the optimists. Narayanan, like Mitchell and Chollet, never denies the value of AI—not in its current form, nor in what it could become. They choose to be a responsible opposition. They’re not the kind of skeptic who shirks responsibility—the kind that clings to ideas that may sound morally elevated in public discourse but are, in every other sense, detached from reality. Nor are they swept up in the kind of fantasies that plague the optimists, who risk undermining present efforts in pursuit of the mirage of an uncertain future.
I could write an entire book just naming those who, commonly referred to as AI skeptics, are doing essential work. Not to halt AI’s progress, but to redirect it when it veers off course. But due to time and space constraints, I’ll end this short list here.
Having named Melanie Mitchell, François Chollet, and Arvind Narayanan, it becomes clear to me just how misleading the label “AI skeptic” can be, especially if we choose to ignore the nuances that set them apart. It’s not the same to be skeptical of AI as a technology as it is to be skeptical of the way the industry—driven by financial and political interests or by a fringe transhumanist identity—has chosen to implement that technology. It’s a nuance that takes effort to untangle, but in times like these—and given that the voices of some skeptics can safely be set aside, while others are essential as a counterweight to power without undermining the value of progress—it’s more necessary than ever.
To those three—and everyone in their camp—I want to extend my thanks. They get far too little credit for work I consider fundamental. Because they exercise their responsibility, as Orwell might say, in a way we usually only expect from and demand of those who wield power and thus must make decisions. These AI skeptics are bold enough to ask themselves, “What would I do?” without deriving much personal benefit from it. Because if they were right, someone within the ruling power would already be rebranding their skepticism as a tool for progress, taking the credit along with it.
It’s easy to be a skeptic from the sidelines, from the comfort of knowing your words won’t be tested against the world. It’s when the skeptic refuses to let the quality of their thought deteriorate just because they’re in the opposition that they truly earn the title of skeptic.
That’s why I propose we get rid of the negative connotation that still clings to that word, and pledge to remember that not everyone who stands in opposition does so out of hate. Sometimes, they do it out of the deepest kind of love.