A team of five savvy experts (for lack of a better label to group them under) has published a long and thorough report on what they predict will happen in AI between now (April 2025) and the end of 2027.
For reference, it belongs together with “Situational Awareness” and “Machines of Loving Grace.” Similar quality, similar depth, similar angle. The main difference is its extreme concreteness (easily testable in retrospect but harder to get right).
For a non-technical audience, the “AI 2027” report can be summarized—obviously incurring some unfairness—with its initial sentence:
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
I’ve read it in full. You should, too, if you care about the short-term future of AI (or you can watch the 3-hour Dwarkesh Patel podcast episode). But fair warning: parts of it are very technical, and others sound straight out of science fiction. Still, it’s valuable for one reason above all: it shows what “taking forecasts seriously” looks like.
This isn’t a summary but rather a set of general remarks on what I see as the report’s main flaws. Higher concreteness—which is laudable—allows for stronger criticism. It also makes the authors’ biases more obvious. Two that I saw right away: they're heavily pro-American and belong to, or are adjacent to, the AI safety/AI alignment movement. That said, I don’t mean to downplay their effort; no report of this kind is free of bias, and no person is free of partisanship.
The first half (2025-2026) is great. I agree with most predictions—e.g., agentic models, coding automation, job losses—and I learned plenty that I didn’t know (or hadn’t thought about in such detail).
The one point I’d firmly challenge is the role the report assigns to China.
The authors seriously underestimate the ability of Chinese researchers to outpace their Western counterparts in designing better algorithms—without needing to steal them (at times, the report reads like the plot of a pro-America, military-grade sci-fi blockbuster). I get their rationale: The US is slightly ahead, so companies like OpenAI or Google might reach the “AI that makes better AI” tipping point first, achieving escape velocity before China catches up (the dream—or nightmare—of recursive self-improvement).
However, this strikes me as overly optimistic for the US, given that it’s still nowhere near the “escape velocity” milestone. Until then, it’s human researchers who matter most, and China has likely already caught up on the algorithmic front. Just look at the DeepSeek team’s unexpectedly strong technical and engineering chops. And they’re not alone: Alibaba (Qwen and QwQ), Moonshot AI (Kimi), 01.AI (Yi), and others are making serious moves. Not to mention China’s enormous head start in robotics.
But still, I agree with the rest of the first half, and I could be wrong about China. My main qualm—and the reason I’m skeptical of their predictions—is the “2027 onward” half. (Note that some of the authors are professional forecasters; they’re experts at doing exactly this kind of thing. Meanwhile, I am no one, so perhaps you should account for that in your prediction of this post’s accuracy.)
They admit they can't ground the scenarios beyond 2026 as precisely as those up to that point, and they explain why: AIs making better AIs compounds exponentially in a way that’s hard to forecast. It’s as if humans became biologically smarter—our brains bigger—with every invention we made. The wheel? Let’s add a couple billion neurons to the cortex. The printing press? Another five billion. The internet? Perhaps 10 billion, then. We’d not only have more tools and stuff than Socrates and Plato but brains twice as large.
So it’s a reasonable caveat. They’re careful not to portray their research as infallible. However, they still summarize the year 2027, as they predict it, like this:
Over the course of 2027, the AIs improve from being able to mostly do the job of an OpenBrain [invented AI company to act as placeholder] research engineer to eclipsing all humans at all tasks.
It’s a bold prediction but not an implausible one, given the trajectory set by these past few years of rapid progress. They admit low confidence in their timelines, and I’m not here to nitpick that. What I question is the underlying assumption: that once AI reaches what they define as “superintelligence” (i.e., “better than the best humans at everything that doesn’t require a body”), the world will suddenly fall under its sway.
Here’s my response, condensed: The fact that AI can do everything better than the best humans doesn't immediately imply that it is doing so in the real world.
This detail, which the authors don't explore much, is crucial. Superintelligence’s measurable effect on the world—either by itself or through our deployment and use of it—will be gradual and incremental. I don’t think it’s trivial to go from “here’s this thing we’ve just created” to “the world has been revolutionized by this thing.” They make it feel trivial.
It may seem that I’m rejecting the “hard takeoff” scenario, but I’m just rejecting its implied premise, as presented by the authors of the report. I simply don’t think superintelligence is, or can be, the same as omnipotence; there’s a huge gap between those two things, and thus revolution takes time. This entire conversation feels so sci-fi that it’s easy to make that misstep. I don’t think the authors made it, but they hand-waved the gap too much for my taste.
I believe there is unassailable friction between superintelligence and revolution. When you solve intelligence, the bottleneck for effective change doesn't disappear; it moves somewhere else, and frictions that were less important before become significant.
Political: People in charge will panic: “I will shut down this little experiment of yours in the name of national security!” Or, alternatively (and perhaps more realistically), remain oblivious: “OK, you made a funny AI demo today, so I assume I can expect a better funny AI demo next year.” You’d better lobby convincingly.
Geopolitical: Do we really want to repeat the mistakes we made with nuclear weapons? There's no “non-proliferation treaty” when the weapon is alive and making copies of itself. “But China!” China will probably stop before the US does.
Social: Imagine millions of people in the streets protesting the high risk of mass unemployment—that’s what they can mentally afford to worry about. How do you think the government will react: by sending the military after them, or by turning on the pariahs still pushing their “science fiction delusions”?
Cultural: Who has the time to deal with an alien intelligence without our manners or customs—or biological drives and needs—when we don't even get along over skin color or birthplace? I can tolerate anything except… what I don’t understand.
Psychological: You can't trivially update the behavior of 50-year-old workers who are busy dealing with 100-year-old workflows. And however cool your AIs are, we can't just let go of all these people (or we worsen the “social” point above).
Logistical: Digital AI is useful, but what if embodiment proves to be a strong requirement for superintelligence? Can your “country of geniuses in a datacenter” manufacture robots as quickly as it updates its weights?
Economic: CapEx and OpEx are through the roof, yet GDP and total factor productivity (TFP) haven’t moved. Why should I and my deep pockets believe your AI narrative? Or, alternatively, why don't we throw in some tariffs for everybody and see what happens!
Bureaucratic: Can you imagine the paperwork required to authorize a superintelligence? You’d need another superintelligence just to file it. Bootstrapping paradoxes aside, I wouldn't blame it for becoming a paperclip maximizer after that out of spite.
Some of these will be irrelevant (who cares about human workers in a post-AGI world, right?), but I don't think you can seamlessly turn a superintelligence into a fix for these frictions. Or fix them yourself by lobbying convincingly and whatnot. Combined, they’re an undeniable hurdle. I picture the AI feeling like the proverbial unstoppable force that has met its match, thinking through the obstacles at the speed Usain Bolt runs through honey, powerful but impotent.
I can’t imagine timelines as short as theirs being meaningful without a clear accounting of the huge—like, terrifyingly huge—inertia of our world. The future being “unevenly distributed,” as the author William Gibson put it, is not just a testament to the intrinsic inequality of our civilization but an observation of the clash of forces—some trying to move the world forward, others trying to keep it in place—that imposes on it a heterogeneous elastic tension. In case I failed to convince you, let me share two other quotes that reflect my vision.
Economist Robert Solow:
You can see the computer age everywhere but in the productivity statistics.
Or, as I wrote recently, “If AI is so great, someone should tell GDP.”
Scientist Roy Amara:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
It will come, but it takes time.
And a third one, on the house, prompted by the “good news” coming from America:
Sometimes, things do happen that thwart your day, your plans, and your predictions. And your lifetime savings. And carefully designed global trade mechanisms that pushed the world's economy forward for 70 years.
Suck that one up, AI.
There are too many buts and ifs, too many obstacles: technical, logistical, economic, and so on. Consider also the second- and third-order effects: there’s already significant pushback against current AI systems for making people less capable, so imagine the reaction to a superintelligent one. I even wonder whether we will choose to let a superintelligence do everything for us. The societal changes are too large, and the technology just isn’t there yet. This will take decades, not years.