You Guys Didn’t Need to Sell AI So Hard (PART I: REQUIEM)
I could summarize the first part of this series by rephrasing the headline as “What comes after this pile of failures?”
Not enough revenue or profits, not enough paying users or enterprise customers, and not enough good ideas to transform into thriving startups are signs of market failure. Add to that skyrocketing valuations and over-investment and you have the symptom profile of an industry-wide bubble. Worst of all, tech companies pitched an AI revolution they had no intent to fulfill and sold wild visions of the future—machine superintelligence—they never truly believed in.
So the constant hyperbole around generative AI was an unjustified burden. We agreed on that much. Not because the tech is useless, as many critics assume, but for the opposite reason: the tech’s untapped potential alone would have been enough to hand eager CEOs and wealthy shareholders their succulent treat. I agree with Anthropic CEO Dario Amodei on this: AI’s upside materializes naturally—almost spontaneously—if we let it.1
But they didn't.
However, calling generative AI just a “pile of failures” is neither fair nor fully accurate. My intention isn’t to dunk on the past two years. Failures, indeed, but what else breeds success and progress? That's why I'm bullish regardless.
My task today—which may be surprising to those of you who agreed with me last time—is to call out the anti-hype, which has gradually morphed into anti-hope (we went from “don't be so excited” to “there's nothing to be excited about” overnight).
It turns out, extreme anti-hype is also an unjustified burden.
Anti-hype should always match hype in force—never exceed it. That’s my rule of thumb: if you can’t discern hype from substance, you’ll never mount an impartial opposition. From what I can tell, anti-hype is more powerful than hype nowadays (just read the press headlines). That’s evidence we’re doing it wrong. Doom-and-gloom has become a lucrative practice, so detractors have double the motivation: hating AI for pleasure and cashing in on it.
More often than not, those who over-hype in the extreme and those who anti-hype in the extreme are cut from the same cloth. Both care less about the truth and more about pushing their annoying agendas. I mean, we can all use ChatGPT—can you stop saying it’s useless just because it somehow fails to count R's or compare numbers? Everyone else has moved on.
Anyway, in the gaps between embellished promise and catastrophic demise—where I want us to be—there's plenty of unfulfilled utility. So I won't overstate the downside just like I think they shouldn't have overstated the upside.
A different question, however, is whether the bubble will cause irreversible damage to the field once it pops. That’s not vapid anti-AI sentiment but a serious concern.
Here's my take: When the much-discussed bubble eventually gives way, it will serve as an implicit admission of past failures. Tech CEOs will have to rein in their grandiose declarations and adhere to the script of Truth. That's good, albeit insufficient. Thankfully, the bubble has another role to play. A key role: Rather than halting the renaissance, as fervent anti-hypers believe, the popping of the bubble may well be the very trigger that makes it possible.
I will try to convince you that's the case. Let me tell you what I see.
Stakeholders will push big tech companies to reduce their AI-focused CapEx, effectively acting as a self-regulating mechanism that compensates for overspending.
Most AI startups will go bankrupt or get “acqui-hired”: because they were shady, like Stability and Rabbit; because they were useless, like Inflection and Humane; or because they realized revenue would never cover costs, like Character and, soon enough, Friend.
Investors will tire of waiting for their promised ROIs and will move on or risk losing big. Companies will have to become more effective or give up.
Even chipmakers and cloud hyperscalers—the real winners of the generative AI frenzy (some more than others)—will find out that every bubble eventually pops.
These are all hints of an unwinding phase (though we may still be far from the peak). Hype-driven investors who went crazy during the generative AI boom will lose everything in the requiem and won’t get their money back during the renaissance.
The bubble will get the best of them. For good.
But their loss is our win. If you remember only one idea from this article, let it be this: Their loss builds our world.
Millionaire investors throwing away their money during a heavily innovative phase means they bought the hype and overinvested, moved by conviction and greed. But their dollars build the infrastructure nonetheless. (The nuclear energy contracts Microsoft, Google, and Amazon recently announced are an example of this.)
During the Canal Boom of the 18th century, a bubble formed. Many lost money. Yet we gained a world crisscrossed by waterways. Under the Railway Mania of the 19th century, a bubble formed. Many lost money. Yet we gained a world bound by iron. In the Internet Frenzy of the 20th century, a bubble formed. Many lost money. Yet we gained a world connected by fiber.
A financial bubble isn’t the end of technology but its true beginning.
With AI it’s the same: the fabs and foundries, the ASICs and GPUs, the huge datacenters, the LLMs, and the chatbots are here anyway. You may think they’re an unnecessary expense (or a total waste of resources if you don’t like AI) but, at least, the money didn’t just feed the machine of financial capitalism.2 It created tangible wealth.
I agree with Noah “normalizer” Smith here:
> The future of AI is just going to be like every other technology. There'll be a giant expensive build-out of infrastructure, followed by a huge bust when people realize they don't really know how to use AI productively, followed by a slow revival as they figure it out.
In short: the investment bubble will pop but mid-term and long-term it won’t matter at the application level. Not even at societal, cultural, or economic levels.3
A few years after the collapse, we’ll remember the AI craze like we do the dot-com bubble today. From the seeds that endure, a new wave of powerhouse companies will emerge—the new Amazons, Googles, and Facebooks. AI applications will know no boundaries. Across sectors, they’ll boost productivity and drive economic growth.
And the world will be better for it.4
Hype and anti-hype will both fade, making way for three features that will mark the next phase: market stability, industry maturation, and new usage habits. Habits that will turn into customs we’ll all come to share—hypers, critics, and syntheticals alike.5
“Market stability” means the number of winners will be tiny. (But when wasn’t Big Tech winning over small labs and open-source efforts anyway?) That in turn increases adoption because Big Tech, owner of the internet, can safely integrate AI across its whole spectrum of products—Gemini Live, Apple Intelligence, Microsoft Copilot, Meta AI—without worrying about little competitors trying to become the innovator's dilemma’s poster child.6
“Industry maturation” means that across non-tech enterprises, action-by-FOMO, prevalent before the requiem, will give way to action-by-utility after the renaissance.
“New usage habits” is the trickiest. Because it requires people (ugh). Will we like generative AI then, suddenly? Will we forget this “two-year-long hellish hassle” of unwarranted promises of a better world?7 Don’t count on it. Haters will still hate and the excitement of the anodyne majority will dwindle. However, that always happens with consumer tech. It’s crazy when you think about it, but I bet you no longer notice the high-tech, digital-Swiss-Army-Knife silicon slab you keep in your pocket.
That’s the point!
That’s the natural process of a tech bubble popping: turmoil and over-enthusiasm intertwine at first, as the bubble grows, and then, after the breakdown, a low-hype, no-expectations—even boring—phase follows. If the technology is worthy, that phase eventually gives way to a resurgence, as Noah says. That’s when we move on from novelty and, in its place, utility, stability, maturation, and new habits emerge.
From requiem to renaissance, to a long spring of good ol’ normalcy. But they have to get to work—honest, empathetic work.
Or the buds won't bloom.
You don’t need to lure me into buying a chocolate cake; just show it to me and I’m sold! But if I catch you hijacking my brain with marketing tricks, oh—you’re my enemy for life.
I respect people (e.g. Grady Booch) who argue that the infrastructure that powers the current AI boom is “an unnecessary expense” because we’d be better off devoting those same resources (money, time, talent) to finding better architectures (e.g. the human brain requires orders of magnitude less energy than a datacenter and remains strictly smarter). Why are AI companies sticking to GPU-like chips when it’s not the optimal design for intelligence? The best counter-argument against this is not “Nah, we do have the right architecture” but “We’re doing this to find the right architecture.” It’s hard to disagree with that unproven but potentially fruitful claim: current AI may help us do it better than we can alone. Of course, this could also be delusional hope, but given that Big Tech is already using AI to write a substantial fraction of their code, I’m not so sure.
There’s a Spanish expression for which no direct English translation exists: “La avaricia rompe el saco.” It means “greed breaks the sack.” We shouldn’t forget, however, that greed fills it first. Likewise, we love to recall the dot-com crash as a terrible precedent but often overlook just how dramatically society and culture have transformed since 1995, at least in the virtual realm (not that it’s all for the better, but the change is undeniable).
If they don’t fuck it up. More about this in the next part of the series.
This is why it’s misguided to say, “AI won’t replace you; a person using AI will,” without realizing that the person in question is likely you, using AI.
A few big players dominating an entire market stabilizes it over time, but that’s not good for healthy competition or for us consumers. If Nvidia, Google, Microsoft, Apple, and Meta end up owning 100% of the AI market, the world will be more unequal as a result. A bit of inequality is necessary for progress but too much hinders it. Give the winners their rewards too soon and you’ll ensure they never lift a finger again to innovate out of their local minima.
To judge the current build-up of the AI industry, we need to remember that it is not just a technology. It is evolving into a new type of intelligence, and it is doing so at an exponential pace. Therefore, comparing it to previous inventions such as electricity or nuclear fusion is misleading. This is reflected in how much trouble the market is having with Nasdaq valuations.
> I mean, we can all use ChatGPT—can you stop saying it’s useless just because it somehow fails to count R's or compare numbers? Everyone else has moved on.
As a non-native English speaker, advancements in GenAI have made me ten times more productive in some areas—no exaggeration. Instead of spending hours rewriting documents and emails to sound natural, I can now focus solely on ideas. I wonder if critics fully grasp this non-native aspect and its associated productivity gains… Those who think in one language and communicate in another understand that ChatGPT (or similar tools) simply works.