xAI and DeepSeek Have Shattered the AI Industry’s Golden Tenet
OpenAI, Google, and Anthropic can't ignore the new state of affairs anymore
Around this time last year, I wrote “The State of AI, 2024.” You guys loved it. I think the reason is that I showed both sides of the story. Despite the war between evangelists and skeptics, AI isn’t black-or-white. And since I have no emotional stake—if it thrives, I’m an enthusiast; if it fails, I’m a writer—I painted it gray.
On the one hand, the hype seemed to be fading. Companies struggled to turn innovation into revenue. Startups failed or were acqui-hired (e.g. Stability, Inflection, Character, Adept). Public excitement plateaued. Professionals saw AI tools as helpful but not transformative. Sky-high expectations had turned into unfulfilled promises. “High expectations can help a lot in the short term,” I noted, “but they entail a dangerous trade-off with trust.” Trust was broken.
On the other hand, history suggests this is normal: electricity, the printing press, and the internet all faced periods of doubt before reshaping society. I quoted economist Tyler Cowen: “Every revolutionary technology has a period when it feels not so exciting after all.” It was happening to AI. But despite the waning buzz, researchers and engineers pressed on. Models grew smarter. Infrastructure expanded. Open-source projects thrived. And I kept writing.
The question then, as now, was simple: “Can those building AI in the silence of their own conviction prove that it’s worth being called a revolution?”
Or, to put it another way: Is AI a cornerstone of the future, hitting a few bumps as it finds its place, or is it instead a fleeting fad that works just well enough to be propped up by greed and hype?
History favors the optimistic answer. I do, too. But it takes time.
Surely more time than we’ve had so far, because today, in late February 2025, nearly a year later, I still find myself searching for a coherent narrative thread in what feels like a jumble of stories, hopes, and predictions. The AI picture is as gray as, if not grayer than, it was at the beginning of 2024.
I doubt the exploration I embark on with this post will achieve satisfactory clarity either—only the wisdom of hindsight and the passage of time can untangle the knots of history. There’s no use trying to figure out AI through analyses, debates, and blogs alone. It will become its own thing once it’s ready, shaped by experience and use.
Still, we can lay the cards on the table and watch the game unfold patiently. I’ll walk you through seven charts that, in my view, capture the seemingly paradoxical state of AI. Each one reflects a fundamental question, waiting anxiously to be answered:
What is the best path forward for AI companies: maximizing scale (xAI style) or efficiency (DeepSeek style)?
Should AGI remain the ultimate goal, or is it wiser to focus on measurable real-world impact (e.g., GDP, TFP, or HDI)?
How is it that AI models are becoming increasingly cheaper to run while simultaneously achieving higher performance?
Are AI models approaching AGI, or have researchers settled for narrow superhuman performance?
Why can AI models solve PhD-level problems in seconds yet stumble over the dumbest tricky puzzles?
ChatGPT has reached 400 million weekly active users, so how come people’s concerns over AI’s role in daily life are also growing?
Why am I not out of a job already?
To make it palatable, I’ll break this analysis into a series of seven posts, publishing them over the coming weeks. Each will attempt to answer—or at least clarify—one of these questions. Once complete, I’ll compile them into a single mega-post for easy reference.
The first, which I’m publishing today alongside this introduction, might be the most important. It also expands on the first entry in my latest weekly review and, in a way, builds on my recent posts about Grok 3 and DeepSeek.