At this point, asking what comes after generative AI sounds, to those as deep into the rabbit hole as I am, like asking what comes after death: The end never seems to arrive, and you wonder if something worse awaits on the other side—is it the Jabberwocky or just more wonky jabber from a new breed of AI models?
Maybe it’s heaven. But not even a godly paradise could make up for a two-year-long hellish hassle of financial stakeholders bombarding us with promises of a better world.
The better world will come but c’mon guys—you didn’t need to sell it so hard!
The headline offers a flicker of hope: we may be approaching the grand finale of the epic saga that generative AI has become—whether it ends in triumph or tragedy. Once it’s over, the disgruntled will return to their lives or seek a new social cause to champion or, well, forget it all like a bad dream in the morning. The early adopters who found generative AI useful and/or interesting (I belong to this group, in the spirit of transparency) will… also return to their lives.
That’s right—
No killer app has emerged.
No one-guy billion-dollar business either.
Adoption by non-tech enterprises is minimal and slow.
Stocks fall, investment stalls, and projects are swiftly dropped.
CapEx is high but revenue remains low despite unrealistic projections.
AI was never as ubiquitous as it felt scrolling Twitter—a tiny minority love it, a larger minority hate it, and the vast majority are either indifferent or ignorant.1
There’s no mass productivity improvement (except the typical self-congratulatory pat on the back) or economic growth. Merely anecdotal individual increases—and decreases.2
It seems the end of generative AI will be but a reversion to normalcy, not some sort of exponential utopia. Sorry, Sam. Sorry, Dario. Sorry, Demis. Sorry, millenarians, but it seems nothing ever happens.
Hold up, Alberto!
What about coding agents like Cursor, Replit, Microsoft Copilot?
What about Anthropic’s newly announced general-purpose “computer use” feature?
What about Google NotebookLM, the viral AI podcast-maker?
What about large reasoning models (LRMs) like OpenAI’s o1 family?
What about DeepMind’s AlphaFold, AlphaProteo, and AlphaGeometry—breakthroughs in AI applied to biology and math?
What about Meta’s Orion, which Zuck predicts will replace the iPhone and become the primary computing platform by 2030?
What about our Machines of Loving Grace?
Are those things a reversion to normalcy, too? Are Sam, Dario, and Demis wrong when considering the long-term perspective?
Perhaps not.
The road ahead may not be as bleak as I’ve painted it. But to underscore how crucial it is for them to get it right—and how badly they’ve damaged their public image so far—I first have to lay out the bear case, the current state of play. It gets more bullish from here, both as you read on and, hopefully, as time progresses.
Anyway, bear with me now.
Attempts at countering this gloomy outlook need to offer a convincing argument (preferably without basing the case on extrapolating straight lines on a graph). Until then—interest, investment, productivity, economic growth, revenue, profit, and enterprise adoption aren’t as high as AI pundits claimed they would be.
I don’t think they expected their bolder promises to bear fruit, though. Here’s a secret: They painted a revolution more drastic than the greatest periods of progress, like the Industrial Revolution—but it was never their goal for it to happen. AI products aren’t even their main offering. No—the centerpiece was this beautiful AI story. They wanted us to buy it. We did. Fine, they won. The least I can do is call out their bluffs and put the cards face up on the table.
Others are calling them out, too. Some predict, with thinly veiled pride, that the bubble will burst any time now. Some are surely watching the situation unfold with uncanny schadenfreude. I wonder why. Usage remains stable—even growing—but people aren’t convinced of the tech’s worth. Otherwise, they’d pay the bargain price of $20 per month. Sadly, the lack of a user manual and a clear value proposition obfuscates the onboarding process and steepens the learning curve.
Here’s an exchange I’ve had way too many times: “What is ChatGPT good for?” they ask. “Potentially many things, but you have to figure it out yourself,” I say. “Then why would I pay if I can’t tell whether it’s worth it?” they insist. “Try and decide.” They never do.
Even the hopeful ones, like Sam Altman, concede AI could be “just” the expected innovation of our times—not something that will trigger a new Industrial Revolution, but what we desperately needed to bring back the “missing productivity gains of the last few years.”3
With AI, the economy isn’t growing, but without it, it’d keep shrinking, he says. Optimistic forecasts overall.
I warned about the dangers of “story-selling” generative AI more than a year ago (I republished it recently because the argument never went out of fashion):
Very few things in history have withstood the test of time to be called ‘revolutionary’. . . . It makes no sense to preemptively qualify as revolutionary an innovation that has not yet proven itself worthy of the burden of such responsibility.
Investors and enterprises don’t judge generative AI for what it is but for what it appears to be. The public judges it in the present for what they’re told it can be in the future. Tech CEOs embellished it. Journalists, analysts, bloggers, etc. followed suit. Eventually, Goodhart’s law made the garnish and the filigrees the selling point.
If an invention is touted as revolutionary, it won’t matter if it’s merely great. That isn’t enough. Hyperbole is a burden, not an inspiration.
Here’s another secret: Just like AI evangelists never intended for the technology to be as revolutionary as they claimed—an epochal, world-shattering force on par with humans becoming the apex species on Earth 100,000 years ago—they didn’t even believe it could.4
As blogger Matt Yglesias correctly points out, the divide isn’t between AI doomers (“AI will kill us all”) and AI optimists (“AI will be awesomely great”)—both consider AI the most important thing ever, for better or worse—but between normalizers, who think AI will be comparable to previous innovations, just not otherworldly, and believers, people who think superintelligence is possible, attainable, almost inevitable—a goal set up by divine will to happen in this decade’s calendar.
It turns out that almost every decision-maker in Big Tech is a normalizer; deep down, a skeptic toward the ultimate goals of the field (yes, Matt, including Altman, who is consistent in his grandiose predictions only because he knows that those who can make OpenAI thrive overlap far too much with those who are truly worried). When Sundar Pichai, Satya Nadella, Mark Zuckerberg, Elon Musk, and Jensen Huang shout “REVOLUTION!” they’re playing a magic trick on us.
They adhere to Arthur C. Clarke’s famous observation: “Any sufficiently advanced technology is indistinguishable from magic.” So, like magicians, they’ve perfected sleight of hand and misdirection, baiting us into mistaking for a magic wand what could be described as a wooden stick. Is generative AI merely a wooden stick, then? Not really. I find it valuable. To me, it’s somewhere between a staff and a cane: a category of tool that has repeatedly proved moderately useful in a few non-standard and low-stakes situations—pretty much like all newborn digital technologies.
So I guess I’m a normalizer. That’s why I remained skeptical—like them.
The world didn’t resist their mentalism. It bought the revolutionary story and embraced the false belief. ChatGPT was the perfect gimmick, a slick-sounding deal. Turned out to be a fantasy driven by vested incentives and human gullibility.
The bearish phase we’re enduring is the outcome of the fading spell cast by these sly wizards. After an inevitable requiem—a solemn rest to realign our expectations with reality—we’re ready for the rise of the bull.
It’s time for rebirth.
For renaissance.
Studies show conflicting results, from “devs gain little from AI coding assistants” to “it does help, especially less experienced devs,” so I can only tell you to “beware the man of one study.”
The silver lining comes at the individual level. 1 in 4 Americans used generative AI in August 2024, and ChatGPT usage keeps growing. However, we have yet to see how much usage drops once companies decide they can’t keep draining money or the market readjusts after the bubble pops (or most competitors are acquired by the tech behemoths). Again, one study is worthless, and it’s still too early to claim success or failure. After all, setting the bar at $20/month already reduces the number from 350 million total subs to 11 million paid subs for ChatGPT (that’s 3%, a worse ratio than this rusty newsletter!). People love to compare ChatGPT’s rapid early growth with Netflix’s and whatnot. For comparison, Netflix has 280 million paid subscribers, YouTube Premium has 100 million, and Spotify has 250 million.
That’s him when he’s trying to calm the mood. When he’s talking to investors and policymakers he uses quite a different style.
We’re talking short term here, as in “this decade.” It’s useless to discuss arbitrarily distant long-term predictions because, given infinite time, anything that can happen will eventually happen—including AI being revolutionary.