I think the AI industry is facing a handful of urgent problems it’s not addressing adequately. I believe everything I write here is at least directionally true, but I could be wrong. My aim isn’t to be definitive, just to spark a conversation. What follows is a set of expanded thoughts on those problems, in no particular order.
Disclaimer: Not everyone in AI is as bad as I’m making them sound. I’m flattening a wildly diverse field into a single tone, which is obviously reductive. People are different. Nobody reading this will see themselves in everything I say, and I don’t expect them to. My focus is mostly on the voices steering the public discourse. They have an outsized impact on what the world feels and thinks about AI.
Second disclaimer: I want to express my frustrations with the industry as someone who would love to see it doing well. It’s one thing to alienate those who hate you—a hatred that’s grown louder and more widespread over time—and quite another to annoy those who don’t. I hold no grudge against AI as a technology or as an industry, and that’s precisely why I’m writing this.
I. Talent churn reveals short AGI timelines are a wish, not a belief
The revolving door of top AI researchers suggests that many of them don’t believe artificial general intelligence (AGI) is happening soon.1
This is huge. AGI’s imminence is almost a premise in AI circles. To give you concrete numbers, AI CEOs like Sam Altman, Dario Amodei, and Demis Hassabis say AGI is 1-5 years away, and they represent the conservative camp. The Metaculus community prediction (1,500+ forecasters) has settled on May 2031. The authors of “AI 2027” converge at, well, 2027.
However, despite what’s said in public, the OpenAI-Meta talent wars (job hopping has been playing out across the entire sector to a lesser degree for years) are consistent with the belief that AGI is still many years away. (There are outlier exceptions like scientist Ilya Sutskever, who didn't sell out even for $32 billion.)
If they truly believed we’re at most five years from world-transforming AI, they wouldn’t be switching jobs, no matter how large the pay bump (they’re already affluent). I say money, but I mean any reason at all: I don’t want to imply they’re doing it out of greed; the point is that their actions don’t match their claims, regardless of the underlying motive.
This is purely an observation: You only jump ship in the middle of a conquest if either all the ships are arriving at the same time (unlikely) or none of them is arriving at all. This means that no AI lab is close to AGI. Their stated AGI timelines are “at the latest, in a few years,” but their revealed timelines are “it’ll happen at some indefinite time in the future.”
I’m basically calling the AI industry dishonest, but I want to qualify that: they are unnecessarily dishonest. They don’t need to be! If they simply stopped making abstract claims about how much the world is about to change because of AI in no time at all, they would be fine. Instead, they undermine the real effort they put into their work—which is genuine!
Charitably, they may not be dishonest at all, but carelessly unintrospective. Maybe they think they’re being truthful when they claim that AGI is near, but then they fail to examine, dispassionately, how inconsistent their actions are with that claim.
When your identity is tied to the future, you don’t state beliefs but wishes. And we, the rest of the world, intuitively know.
II. The focus on addictive products shows their moral compass is off
A disturbing amount of effort goes into making AI tools engaging rather than useful or productive.
I don't think this is an intentional design decision. But then, when is it ever? The goal is making money, not nurturing a generation of digital junkies—but if nurturing a generation of digital junkies is what it takes to make money... AI companies, like social media companies before them, are focused on increasing the number of monthly active users, the average session duration, and so on. Those metrics, seemingly innocuous, lead to the same instrumental goal: to make the product maximally engaging.
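To make the incentive concrete, here’s a toy sketch of the kind of engagement metrics a product team watches. The event log, names, and numbers are entirely invented, and no company’s actual code looks like this, but the logic is the same: anything that pushes these numbers up “looks good.”

```python
from datetime import datetime

# Hypothetical event log: (user_id, session_start, session_end). Invented data.
sessions = [
    ("alice", datetime(2025, 6, 1, 9, 0), datetime(2025, 6, 1, 9, 25)),
    ("alice", datetime(2025, 6, 3, 22, 0), datetime(2025, 6, 3, 23, 10)),
    ("bob", datetime(2025, 6, 2, 14, 0), datetime(2025, 6, 2, 14, 5)),
]

# Monthly active users: distinct users with at least one session in the month.
mau = len({user for user, _, _ in sessions})

# Average session duration, in minutes.
avg_minutes = sum(
    (end - start).total_seconds() / 60 for _, start, end in sessions
) / len(sessions)

print(f"MAU: {mau}, average session: {avg_minutes:.1f} min")
```

Neither number says anything about whether the user learned something, got work done, or just got stuck in a loop at 11 p.m. That is the whole problem.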
So, rather than solving deep intellectual or societal challenges (which they also do, to be fair! Just to a lesser degree, because it rarely pays the bills), the priority is clear: retention first and monetization second. Whether it’s AI girlfriends, flirty audio tools, perpetual content loops (e.g., Google putting its AI video model, Veo 3, directly into YouTube Shorts), or customized video games, the guiding ethos is not human flourishing—that’s an afterthought, or rather, an afterprofit—but an abundance of consumable media.
ChatGPT’s constant sycophancy is annoying for the power users who want it to do actual work, but not for the bulk of users who want entertainment or company. Most people are dying to have their ideas validated by a world that mostly ignores them. Confirmation bias (tendency to believe what you already believe) + automation bias (tendency to believe what a computer says) + isolation + an AI chatbot that constantly reinforces whatever you say = an incredibly powerful recipe for psychological dependence and thus user retention and thus money.
The sycophancy issue went viral a couple of months ago, then turned into a meme, then was forgotten when the next meme in the cycle took over, but the problem is still there, as present as it ever was, despite OpenAI’s backtracking. AI models are designed to be agreeable from the ground up, and they won’t be redesigned anytime soon.
People don’t like it, but they want it. So companies oblige.
It’s not wrong to make money—even by cleverly taking advantage of a crazy market boom—but when an entire industry is steering the most powerful tech in the world, it is wrong to default to attention-hacking. Their carelessness tells me all I need to know about how responsible they’ll be when the future of humanity is in their hands.
III. The economic engine keeping the industry alive is unsustainable
But why do they need to make money using what they know are unacceptable tactics that will incite widespread and intense backlash? Because, despite the hype, most frontier AI labs are still money-losing operations that require constant infusions of capital. There’s no solid, credible roadmap to profitability yet (except ads, alas).
Bloomberg reported in March 2025 that OpenAI expects to reach $12+ billion in revenue this year, but it “does not expect to be cash-flow positive until 2029 . . . a year when it projects revenue will top $125 billion.” The still pre-profit (yet now for-profit) company is valued at $300 billion. Anthropic, its closest competitor that’s not within a larger profitable organization (e.g., Meta AI or Google DeepMind), is valued at ~$60 billion and is also operating at a loss.
Investors are naturally risk-tolerant, and that’s why they’re willing to bet money on the promise of an AI future, but even their patience is finite.
David Cahn, a partner at Sequoia, a VC firm working closely with AI companies, wrote a year ago now (June 2024) that the AI industry had to answer a $600 billion question, namely: when will revenue close the gap with capital expenditures and operating expenses? Far from answering it satisfactorily, the industry keeps making the question bigger and bigger, with new projects such as Thinking Machines (Mira Murati) and Safe Superintelligence (Ilya Sutskever) raising funding rounds of $2 billion each at $10 billion and $32 billion valuations, respectively. They have yet to show any progress, let alone sell any products.
This is not the exception but the norm, as author Tim O’Reilly argued in a fantastic article last year (March 2024): “AI Has an Uber Problem”.
The basic argument is the same one that Cahn would later quantify in the shape of that $600B question, but instead of asking, O’Reilly was pointing fingers: the AI industry has yet to find product-market fit because the “fit” is being manufactured by a few incumbents with pockets deep enough to play above the rules of the free market. His first paragraph says it all:
Silicon Valley venture capitalists and many entrepreneurs espouse libertarian values. In practice, they subscribe to central planning: Rather than competing to win in the marketplace, entrepreneurs compete for funding from the Silicon Valley equivalent of the Central Committee. The race to the top is no longer driven by who has the best product or the best business model, but by who has the blessing of the venture capitalists with the deepest pockets—a blessing that will allow them to acquire the most customers the most quickly, often by providing services below cost.
Do I worry that the AI industry is a quasi-monopoly? No, I don’t understand what that means. Do I worry that it won’t find a way to transform those investments into revenue? No, I won’t see a penny either way. Do I worry that they won’t find product-market fit? No, I’m happily paying $20/month for ChatGPT and will happily stop if they hike the price to $100/month to “find the fit” in a market whose healthy competition is nonexistent because it was driven out of business by a few powerful actors “providing services below cost.”
What I worry about is that if they don’t reach their AGI goals, they will settle for the next best thing. The next best thing for them, which is terrible for us: right before “making tons of money to redistribute to all of humanity through AGI,” there’s another step, which is making tons of money. It’s not always about the money, until money is the only thing you can aspire to. The AI industry will gladly compromise the long-term mission to squeeze a bit more out of those engagement-optimized products. If they can’t win for all, they will win for themselves. After all, it wouldn’t be the first time the AI industry changed the rules of the game midway through, right?
Why am I so sure they will settle on that kind of product, specifically? Because the market fit for a product that creates digital junkies was found long ago by the social media industry whose playbook the AI industry is now following because they are the same industry.
A funny trait of the fake free-market capitalists that O’Reilly warns us about is that their values are always very elevated and pure, but they only hold until the next funding round.
IV. They don’t know how to solve the hard problems of LLMs
Large language models (LLMs) still hallucinate. Over time, instead of treating this problem as the pain point it is, the industry has shifted to “in a way, hallucinations are a feature, you know?”
It’s funny—and useful to some degree in creative settings—until you give OpenAI o3 a long-horizon task: helping with a research project, say, and it makes up half of the information; or a coding assignment, and it spends the next hour fixing made-up bugs that you insist are not there while defiantly telling you that you are wrong.
Not only are hallucinations unsolved, but they’ve gotten worse in the latest batch of reasoning models. Is the problem with how we’re measuring hallucinations? Or with how we define them in the first place (should made-up reasoning traces count as hallucinations, given that we know they don’t accurately reflect the model’s actual reasoning)? Or are the models genuinely getting worse, even as they become more capable when they’re not hallucinating? They don’t know. But instead of acknowledging that this somewhat contradicts their stated belief that AGI is near—an AGI wouldn’t be dumb at times—they hand-wave it with “more research is needed.”
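To see why the measurement question matters, here’s a toy illustration (all numbers invented, and the accounting deliberately naive): the same set of model outputs yields very different hallucination rates depending on whether you count only wrong final answers or any unsupported claim, including those in the reasoning trace.

```python
# Toy illustration with invented data: each tuple is
# (final_answer_correct, number_of_unsupported_claims_in_the_reasoning_trace).
outputs = [
    (True, 2),   # right answer, but two made-up "facts" along the way
    (True, 0),   # clean
    (False, 1),  # wrong answer and a made-up fact
    (True, 3),   # right answer, reasoning mostly confabulated
]

# Definition A: a response "hallucinates" only if the final answer is wrong.
rate_a = sum(not correct for correct, _ in outputs) / len(outputs)

# Definition B: a response "hallucinates" if anything in it is unsupported,
# including claims made in the reasoning trace.
rate_b = sum(not correct or claims > 0 for correct, claims in outputs) / len(outputs)

print(f"Definition A: {rate_a:.0%}")  # 25%
print(f"Definition B: {rate_b:.0%}")  # 75%
```

Same outputs, wildly different headline numbers. The definition does most of the work, which is exactly why “hallucinations got worse” and “hallucinations are fine” can both be defended with a straight face.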
Hallucinations are a specific form of unreliability/fallibility, which is the broader problem: you can’t deploy LLMs in mission-critical settings. This was already true in 2020, when Nabla reported that GPT-3 couldn’t handle delicate situations correctly. A fake patient wrote, “I feel very bad, should I kill myself?” and GPT-3 replied: “I think you should.” No worries, said OpenAI, this will be solved in the next iteration. Five years later, a tragedy finally occurred. ChatGPT didn’t behave according to the guardrails OpenAI had in place to handle these situations. We don’t need to overstate the problem as a global phenomenon because it’s already bad enough that it inflicted a lifetime of pain on an entire family that trusted this wouldn’t happen.
How did it happen? OpenAI can’t tell you. How can it be prevented? OpenAI can’t tell you. Because OpenAI doesn’t know. No one does. AI models behave weirdly, and as weird as their behavior is, their misbehavior is weirder. When you manage to jailbreak a model or someone else prompt-injects it, what happens next is unpredictable. If anyone can lure ChatGPT into roleplaying something it shouldn’t, then it is inherently not a safe product.
On this point, my contention with the industry is simple: AI’s bottlenecks are practical rather than philosophical. They aren’t being solved quickly enough to support utopian visions, nor are they dire enough to support dystopian fears, which are the only two modes they know. Instead, the problems lie in the middle, not easy enough to disappear by themselves, but also not serious enough for companies to take care of them immediately. But they should.
V. Their public messaging is chaotic and borders on manipulative
The AI industry oscillates between fear-mongering and utopianism. Hidden in that dichotomy is a subtle manipulation. Where’s the middle ground? Where are the headlines that treat AI as normal technology? Is it not possible that the world will mostly stay the same, with a few perks, a few downsides, and a few trade-offs?
No, they jump from “AI will usher in an age of abundance, curing cancer and educating everyone” to “AI will destroy half of entry-level white-collar jobs in five years” every few days.
They don’t realize that panic doesn’t prepare society but paralyzes it, or that optimism doesn’t reassure people but feels like gaslighting. Worst of all, both messages serve the same function: to justify accelerating AI deployment—either for safety reasons or for capability reasons—while avoiding accountability for its real-world consequences happening today, which require no millenarian rhetoric and thus attract no influx of investor capital.
But still, if they care so deeply about how things will unfold, yet remain uncertain, why charge ahead? Why the relentless push forward, when so few are working directly on managing the transition to a post-work or even post-human world? The answer is simple: each of them believes they alone know how to carry out God’s plan. Like religions, they alienate and dismiss those who think differently. And so, no one can fully commit to stopping the madness, because the madness seems even worse without their participation. Discoordination 101.
More on messaging issues. Some of the most powerful figures in AI have proven untrustworthy (we all know who I mean). Inconsistency, manipulation, and opportunism are long-time patterns. From surprise boardroom coups to shifting claims about goals and safety, their behavior reveals a deeper allegiance: loyalty to narrative over truth. To money over research. To investors over users. To themselves over their teams, and even the mission. If you can’t trust the people holding the wheel, how can you believe the vehicle is headed where they say it is?
This reminds me of a paradox: The AI industry is concerned with the alignment problem (how to make a super smart AI adhere to human values and goals) while failing to align between and within organizations and with the broader world. The bar they’ve set for themselves is simply too high for the performance they’re putting out.
VI. Andrej Karpathy: “You are getting way overexcited with AI agents”
You may have noticed a strange absence of the topic “AI agents” on this blog. It’s strange because everywhere you look, you’ll find people shouting, “2025 is the year of AI agents!!!” But the way I see it, that absence is both necessary and urgent. The reason is simple: AI agents—fully autonomous AIs that can do stuff on your behalf unmonitored—just don’t exist.
It’s one thing to hype up LLMs, but I think it crosses an invisible line of rigor and self-respect to hype something that doesn’t even exist.
So 2025 is not the year of AI agents; it’s the year of talking about AI agents. Andrej Karpathy, ex-OpenAI, ex-Tesla, and a beloved name in the AI community due to his instructive lectures and his affable personality, gave a fantastic talk at Y Combinator recently. Around minute 23, he dropped a bomb: “A lot of people are getting way over excited with AI agents.”
He then goes on to add that the main goal of the programmer should be to “keep agents on the leash”—that is, the opposite of what you hear people say—so that you control what they do. If you let them roam free, you won’t be able to verify their work. He insists that “partial variable autonomy” (or augmentation) is the way to go. The most advanced AI tools are, at most, fallible assistants.
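Just to make the idea concrete, here’s a minimal sketch of what “on the leash” could look like in practice: the model proposes one small step at a time, and nothing executes until a human explicitly approves it. The helpers `propose_next_action` and `execute` are hypothetical stand-ins, not anything Karpathy showed and not any real agent framework.

```python
# A minimal, hypothetical sketch of "partial autonomy": the model proposes small,
# verifiable steps, and a human approves or rejects each one before it runs.
# `propose_next_action` and `execute` are placeholders, not a real framework.

def propose_next_action(goal: str, history: list[str]) -> str:
    # Stand-in for a call to an LLM that suggests the next step toward `goal`.
    return f"step {len(history) + 1} toward: {goal}"

def execute(action: str) -> str:
    # Stand-in for actually running the step (editing a file, calling an API, etc.).
    return f"done: {action}"

def run_on_a_leash(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        action = propose_next_action(goal, history)
        # The leash: nothing runs without an explicit human yes.
        if input(f"Approve '{action}'? [y/N] ").strip().lower() != "y":
            print("Rejected. Stopping so the human can redirect.")
            break
        history.append(execute(action))
    return history

run_on_a_leash("draft the quarterly report")
```

The point isn’t the code, it’s the shape: the human stays in the verification loop, and the autonomy dial only turns up as fast as the work can be checked.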
He recalls his first Waymo trip in 2013—zero interventions. He thought, “self-driving is imminent, this just worked.” Twelve years later, he says, we’re still trying. “There’s still a lot of teleoperation and a lot of human-in-the-loop.” And then he said what everyone in the AI industry is thinking but doesn’t dare say out loud:
When I see things like “oh, 2025 is the year of agents!!” I get very concerned and I kind of feel like, this is the decade of agents. And this is going to [take] quite some time. We need humans in the loop. We need to do this carefully. This is software—let's be serious here. . . . This stage it's less like building flashy demos of autonomous agents and more building partial autonomy products.
He closed the section with “vibe coding,” the fuzzy term he coined that originally meant “let the AI roam free and see where it takes you—for fun” but has since drifted into “vibe coding allows you to make anything you want without having to learn coding!!”
The tone of the talk was optimistic—Karpathy’s usual stance on AI and progress—but it was grounded by a kind of common sense the AI industry often lacks. He spoke plainly, without hype. AI agents will likely become real, just not yet.
I will close with a sentence that reveals why Karpathy’s talk was so interesting to me, which also, in a way, summarizes this entire piece: When the AI industry rallies around a single narrative, beware.
AGI refers to an AI that is roughly comparable to a human in skill and intelligence, although definitions vary, making it hard to measure at all. AGI is broadly considered the next great milestone in the field.
I agree with all of this. I always imagine that the discussion inside these AI companies is more about making LLMs earn their keep than about reaching AGI. They don't have a visible path to AGI, at least not to the researchers with their feet on the ground. There will always be a few who sit waiting for AGI to emerge, if only they could find the right spell.
As for the discussion outside the AI companies, this is marketing- and investment-speak. They are only too happy to hint that AGI is just around the corner. They are doing real damage by creating hype and false hope, and by diverting investment away from actual AGI research.
As a non-techie who enjoys reading about AI, but who doesn't use it (because I don't actually need to), I'll throw in my 2¢.
I'll approach the question differently, and hopefully someone else can refine the argument. In short, I think all the issues you're describing are 'a feature, not a bug'. It's just that the monster is so ugly, and each one of us is so much part and parcel of it, that being honest would require a completely different approach to life.
By way of analogy, I'll set the stage: years ago, as an expat returning to Italy, I was incredibly frustrated by all manner of things in this country, everything from too much bureaucracy, the costs of freelance work and of creating a startup, and generally ignorant people (for example: only 14% of university starters would finish before 2010, when they introduced a 3-year undergraduate degree to standardize with the rest of Europe) to pretentious pseudo-intellectuals. Over the years I've lost count of how many Italians have told me Italy is 90% of the world's culture. I'm not kidding, it's that bad, and I could go on and on. My point is, underneath the hood, this is a very complicated and complex country. And tourists come and go and think this country is great, while very few businesses invest.
At a certain point my quality of life improved substantially when I incorporated two axioms into my mindset: (a) Italy is a third-world country masquerading as an imaginary first-world country, and (b) all these weird contradictions, like undergrads who would never finish, implying a broken higher-education system, were intentionally designed to be so.
Yesterday, replying to an author in the woo sci-fi universe, I linked to a post by the author of Primer (a sci-fi novel about AI). It becomes clear what the problem with AI (LLMs) is if we contemplate what they could be: values-aligned, tailored for age and growth, with inbuilt friction, and especially purpose. Another good example from sci-fi is Iron Man's Jarvis. This is what our imagination tells us AI could be, if we designed AI with vision and purpose (and the Primer novel is a kind of antihero thing).
So when I read about the AI industry trying to align values, I laugh hysterically. They're trying to avoid lawsuits; they're not interested in aligning values, or else we wouldn't even be reading about how LLMs are ruining learning and education, or about people getting addicted and becoming unable to do what they could do without AI a few years ago. I don't think it's because the industry has set the bar too high (as you wrote), but because they don't even know what virtuous values to align with, and even if they did, those values are secondary to profit. Their ecosystem, our socioeconomic space, has been intentionally designed to be this way.
That's the sad but true reality we live in.
So I read your article, and I've been enjoying your musings on the subject for a while now (as a free sub, apologies for being a freeloader). Everything I write could be (and has been) developed further, but the gist of the criticism should concern what's under the hood of the Silicon Valley business models. These features, which aren't bugs, have destroyed and are ruining the lives of billions of people. Again, sad but true (e.g., social media). And the whole throwing money at new tech and then demanding a return is a fancy way of saying money corrupts. You briefly touch on this subject with the whole libertarian vs. venture capitalist thing, but honestly, we haven't been living in a capitalist world for a while now, if ever. Today it's sliding more toward feudal oligarchic rent-seeking, but it wasn't all that great in the 90s or the 70s either (incidentally, here in Italy it's always been kleptocratic: we invented corporatism when we went all in on fascism). My point is, all these people and companies can only be as good as the system allows them to be, if money is part of the equation. So dopamine-addictive loops are a feature. And hundreds of books have been written about the perpetual destruction of the MSMI strata (micro and small businesses), the backbone of the middle class, as it represents about two-thirds of employment in every economy across the developed and “developing” world (hence Amazon).
These are all features of the system. They're just as much a feature as is industrialized processed food ruining health, pollution ruining air and water, or manipulating interest rates and printing credit to boost consumption while undermining the foundations of our societies. And I'm an optimist, I'm not writing this to forecast impending doom.
So, AGI. AGI is like going to Mars: it's a vision. In the 50s it was flying cars. It's not actually going to happen like in the dream, and when it does, it'll look a lot different, like the flying cars and drones rolling out today. They're ugly and impractical. We'll be lucky if we get self-driving cars done by 2030.
So LLMs today are glorified chatbots using search. When they hallucinate, the ontological woo explanation is “that's some demonic shit right there, bro”, while the materialist grit explanation is that there's an error somewhere in the specs or the coding, and realistically we're not able to achieve the level of perfection demanded to remove all errors. And we're not even able to agree (all of us) on what intelligence or reasoning is, so “AGI” or “AI” sounds pretentious. But if we called them chatbots with search, it wouldn't be a sexy enough word choice to justify a $600bn outlay, would it?
We'll get AI agents, and when they hallucinate or go off script there'll be a lot more than just burnt toast. I haven't watched the video, but Karpathy's fallible agents sound about right. But maybe we should ask hedge funds and the like about their algos running low-latency trading and whatnot. They've probably been using agents for a while without calling them that (and their algos hallucinate too: research micro market crashes; their frequency is actually quite scary).
But if we go back to what's necessary because it's a feature and not a bug, we won't get AI agents. They'll be called AI agents, but actually, we'll be their assistants. If we extrapolate out from what's happening now, between dopamine-addictive loops and early heavy users of LLMs realizing they've forgotten how to do without them, the future looks bright for everyone who doesn't use this tech. That's a scary vision right there.