Here's Why AI Isn't Worth My $20/Month
An unclear value proposition can destroy the greatest potential
I. Free seems to be the only acceptable price
Let’s keep the criticism pragmatic today.
The key feature of the latest AI model, GPT-4o, isn’t its real-time voice capabilities (which may not scale to millions of users) or its Elo in the chatbot arena (which could be gamed), but the fact that it’s free for all consumer users (not the API or enterprise versions). Even if GPT-4o isn’t better than GPT-4 (as it was marketed to be), it’s a nice gift for the people who used GPT-3.5 thinking it was the best on offer (i.e., most of them).
But something bothers me. I’ve been wondering why OpenAI left no incentive for paid users—like me—to remain as such (disclosure: I’m no longer paying). Why would I spend $20/month for a higher usage limit? Why would I pay anything if I have to wait for months to get my hands on the voice features anyway?
Or, changing the question to understand OpenAI’s decision: What value are paid ChatGPT users (not API or enterprise) providing them in annual revenue?
Let’s do some napkin math.
The number of weekly active users has plateaued, yet OpenAI doubled its revenue in 2024 to $3.4 billion. This suggests some users are paying much more than others, in line with OpenAI’s focus on API and enterprise customers. 600,000+ enterprise seats at $60/month already amount to over $430 million in ARR (about 12.7% of OpenAI’s total revenue). Calculating API revenue is impossible without exact data, but constant upgrades and price reductions keep the most valuable clients happy; 2x the speed at half the price is crazy. Nat Friedman asked for OpenAI’s revenue breakdown, but no outsider seems to know it. My guess is that the API is the largest source by far (possibly above 50%), with consumer subscriptions second.
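For the curious, here’s the napkin math written out, using the figures above (the $60/seat/month price is an assumption based on reported enterprise pricing, and the seat count is a lower bound):

```python
# Napkin math: what the reported enterprise seats alone imply in ARR,
# and what share of OpenAI's reported annualized revenue that covers.
seats = 600_000           # reported enterprise seats (a lower bound)
price_per_month = 60      # assumed price per seat, in USD
total_revenue = 3.4e9     # reported 2024 annualized revenue, in USD

enterprise_arr = seats * price_per_month * 12
share = enterprise_arr / total_revenue

print(f"Enterprise ARR: ${enterprise_arr / 1e6:.0f}M")  # $432M
print(f"Share of total revenue: {share:.1%}")           # 12.7%
```

That leaves roughly 87% of revenue split between the API and consumer subscriptions, which is where the guesswork begins.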
But don’t take my word for it. OpenAI’s focus on enterprise/API and the gift of GPT-4o make the message clear: keeping GPT-4 a paid offering in the playground isn’t worth the monetary return. So they don’t incentivize me to pay because they don’t care if I don’t pay—because most people aren’t paying anyway.
OpenAI has secured robust revenue channels and the right acquaintances (yikes), so they’re now trying to grow the free user base by giving away the best they’ve got (and closing deals right, left, and center). In turn, they kill the competition by evaporating the margins. As I wrote elsewhere, Google, Anthropic (and Meta) are “left trying to make money with $20/month subscriptions for models that are … worse and much less known.”
II. A technological revolution you can't feel
In an episode of The Ezra Klein Show, Ezra Klein and Ethan Mollick discuss how to make artificial intelligence useful. Mollick, an evangelist, has emphasized for years the importance of learning by practice (e.g. “spend 10 hours experimenting”).
This idea, however, doesn’t quite convince Klein, a tech-savvy journalist, even though he acknowledges that artificial intelligence is “a tremendously powerful technology”:
There’s something of a paradox that has defined my experience with artificial intelligence in this particular moment. It’s clear we’re witnessing the advent of a wildly powerful technology, one that could transform the economy and the way we think about art and creativity and the value of human work itself. At the same time, I can’t for the life of me figure out how to use it in my own day-to-day job.
“I can’t for the life of me figure out how to use it in my own day-to-day job.”
Does that sound familiar? I bet it does for many of you.
Many people feel what Klein feels—we’re on the verge of a technological revolution—but when it comes to experiencing it firsthand, the sensation evaporates like trying to describe a dream in the morning. It’s just… not there anymore. Why that weird disconnect between narrative and reality? Why is Klein’s inner conflict so relatable?
If a promised revolution doesn’t come easily, the result will be—it is already—saturation, indifference, and outright contempt. Companies promised everything and gave us something that, in the best case, we don’t know how to use and, in the worst, left us worse off than we were.
No wonder people don’t pay.
III. No, users don't refuse because they don't get it
I lurk in alpha AI bubbles. Here’s the most common take I’ve heard in 2024: “Why do people still use the free version of ChatGPT when for a few bucks you have access to substantially better tools like GPT-4, Gemini Advanced, and Claude 3?” (This changed after GPT-4o became the default model but remains a valid question.)
It feels true: No one I know pays for these tools. No one I know online who’s not in the bubble pays for them either. I’d even wager, without proof, that most users haven’t noticed ChatGPT was replaced by a more powerful version.
It’s true—not as an opinion but as a verifiable fact—that you can get surprising performance improvements (I’m talking about the you-can’t-believe-how-much-better-this-is-until-you-try-it kind) worth much more than twenty dollars. I condemn some of the uses we’re giving these tools, but when used for personal purposes instead of deceptively making money, they’re a bargain if you take the time to learn.
I could only conclude that the correct explanation for why this seemingly crazy world doesn’t value innovation—as soon as it takes some effort or money to extract the juice—is the simplest one: people are either lazy and uncurious (what a surprise) or so angry at artificial intelligence companies that giving them money isn’t an option.
People’s inability to ascribe the right value to AI prevents them from paying the price.
Well…
I was wrong.