Why I Deleted ChatGPT After Three Years
Ads are only the symptom of a bigger problem
I. ChatGPT ads are a deal-breaker
OpenAI announced this week they will start testing ads in ChatGPT’s free tier. This means that if I pay, I’m safe. It also means that I’m forced to pay to be safe. It also means that, if I can’t or don’t want to pay, the tool I’ve been using as an assistant for three years will care about someone else’s interests before mine.
To me, this is a deal-breaker.
I don’t put ads on this blog for that reason: once you optimize for someone other than the person you’re talking to, the exchange is compromised and so is the value you provide.
So far, users have reacted on social media about as you’d expect: some have threatened to leave, others have pointed out the apparent hypocrisy (Sam Altman said ads were a “last resort” in 2024), but most will shrug with the resigned acceptance we’ve developed toward platform enshittification; it’s ads, yes, that’s bad, yes, but there are ads everywhere, so why bother?
Will this be the blow that breaks OpenAI’s momentum? Meh. Will users leave in droves, or will they stay, complain, and then forget? The latter, most likely. Whatever the case, I don’t think ads will deal the killer blow; that belongs to something else. Ads are merely a symptom of a different kind of illness, one that doesn’t concern itself with users or ads or the willingness to do things right.
OpenAI’s pivot is a confession about a structural inevitability disguised as a rather apologetic press release:
The math isn’t mathing.
II. The economics of AI are bad
The economics of frontier AI models are incompatible with the neutral, broadly accessible systems we’ve been promised.
This is well known. Run the numbers on what it costs to give hundreds of millions of people access to ChatGPT. Every query that uses, say, GPT-5 costs OpenAI money (estimates vary, but even optimistic figures put it at pennies per query, reasoning tasks running even higher; not even Pro users at $200/month are profitable). Multiply those pennies by hundreds of millions of daily users—most of whom pay nothing and will pay nothing—and you’re burning through billions annually (OpenAI doesn’t deny this.)
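To make the arithmetic concrete, here is a minimal back-of-envelope sketch. Every figure in it (the per-query cost, queries per day, number of free users) is my own illustrative assumption, not a number OpenAI has disclosed:

```python
# Back-of-envelope inference burn. All figures are illustrative assumptions,
# not disclosed numbers.
cost_per_query = 0.02           # assumed dollars per query ("pennies")
queries_per_user_per_day = 5    # assumed average usage
free_daily_users = 400_000_000  # assumed free daily users

daily_burn = cost_per_query * queries_per_user_per_day * free_daily_users
annual_burn = daily_burn * 365

print(f"Daily inference cost:  ${daily_burn / 1e6:,.0f}M")   # ~$40M/day
print(f"Annual inference cost: ${annual_burn / 1e9:,.1f}B")  # ~$14.6B/year
```

Tweak the assumptions however you like; as long as the per-query cost is non-zero and most users pay nothing, the annual figure lands in the billions.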
This is where the bubble warnings come from (is generative AI a viable business?): circular deals that create no wealth, debt-fueled buildouts, AI companies trying to become “too big to fail,” all betting that revenue will eventually catch up to costs (financial expert Sebastian Mallaby just published a guest essay in The New York Times claiming OpenAI will “run out of money” over the next 18 months). If the AI business doesn’t work out, the correction looks like what we’re already seeing: an industry and investor problem turned into a user problem. If revenue can’t cover costs, enshittification follows. Ads are just the first step, and perhaps the least insidious.
Here’s the money situation. OpenAI has raised spectacular amounts of capital (~$58 billion total, including an unprecedented $40 billion round last year, with the most recent valuation at $500 billion), but capital is not revenue. Eventually, someone has to pay for the immense CapEx/OpEx (training and running AI models while renting GPU access from cloud providers like Microsoft). That’s where we users come in. Conversion rates from free to paid subscribers follow predictable patterns across consumer software: usually single-digit percentages (OpenAI is at roughly 4-5% per the latest figures, with 35 million paid subs out of ~800 million weekly active users).
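The same exercise works for the subscription side, using the figures above plus an assumed blended price of $20/month (most paid users sit on the cheapest tier; the real average isn’t public):

```python
# Free-to-paid funnel using the figures cited above; the average price
# is an assumption (most paid users are on the $20 tier).
weekly_active_users = 800_000_000
paid_subscribers = 35_000_000
avg_price_per_month = 20  # assumed blended average, in dollars

conversion_rate = paid_subscribers / weekly_active_users
annual_subscription_revenue = paid_subscribers * avg_price_per_month * 12

print(f"Conversion rate: {conversion_rate:.1%}")  # ~4.4%
print(f"Subscription revenue: ${annual_subscription_revenue / 1e9:.1f}B/year")  # ~$8.4B
```

Even under generous assumptions, the other ~95% of users generate costs but no subscription revenue; that is the gap ads are supposed to fill.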
AI companies can optimize this (they do), but they can’t defy the power laws of online markets any more than they can defy gravity. Most users will stay free. So OpenAI has three options:
Raise prices, limiting access (high-price tiers exist, but free tiers can’t disappear without growth collapsing because the market is an unstable oligopoly rather than a stable monopoly).
Reduce costs by degrading quality or switching to smaller models (this already happens under the hood; that’s why the free tier can’t see which model is active on ChatGPT: it’s silently replaced by a smaller one).
Find another revenue stream. Aka ads.
OpenAI has repeatedly claimed that it will be profitable by 2030, but that claim was never about the number of paying users (although they project 2.6 billion total users and 220 million paying users by 2030); it was about ad revenue.
Putting ads in ChatGPT entails a huge risk of churn, backlash, trust erosion, competitive vulnerability, brand damage… but it’s hardly a revelation that an ad-free product is better than an ad-enabled one (other things being equal). Everyone knows that, just like everyone knew that Altman would change his mind about ads. The actual revelation is this: if the leading consumer AI company, with more capital and more paying users and more technical talent than perhaps any startup in history, looks at their options and concludes they can’t make the unit economics work without ads, allegedly a “last resort,” then who can?
The other players have different escape routes, but none of them solves the problem. Google subsidizes AI through its existing ad business, which means the commercial interests are already baked in; we just take them for granted. DeepMind is captured by Google and follows the same incentives. Anthropic focuses on enterprise and, with a much smaller consumer userbase, avoids consumer-scale deployment costs for now, so it may not need to put ads on Claude. xAI has the Elonbucks. Meta is… has anyone seen Meta lately?
So what does this tell us about the technology itself? Besides “bubble!” it suggests that generative AI (large language models and chatbots) may not be a consumer product at all. It might be an enterprise tool that consumers get to borrow under degraded conditions, or a loss-leader subsidized by companies with other revenue streams, or a luxury good for those who can pay.
III. The unverifiable promises
Before I prophesy catastrophe, let’s look into OpenAI’s press release.
If you read what they offer, you will notice some welcome promises and compromises. But acceptable as these caveats may be, they’re structurally hard to verify. It won’t be me who doubts Altman’s intentions, but pretty words describing an unaccountable policy are not the kind of promise I’m inclined to buy into.
Consider this example: “Responses in ChatGPT will not be influenced by ads.” This is great! It’s also important! And also unknowable.
How would you know whether this is true, or whether it changes at some point? You can’t audit the training data. You can’t see what fine-tuning has been applied or what behavioral goals were set. You can’t compare your response to some platonic “uninfluenced” version (unless you pay; expect tweets and memes of paid vs. free responses soon). You have to trust the company’s word, and you can’t verify that trust over time.
Again, this criticism is not about Altman lacking candor but about the fact that you should never expect a for-profit company to go against its interests. There’s too much money to be made from letting people assume the strong interpretation of that promise (“My experience of ChatGPT won’t change! Yay!”) and then taking the weak interpretation (“I don’t alter the responses, I’m just training ChatGPT’s model routing mechanism to select my preferred responses”).
Do you really think that an AI company like OpenAI, capable of training a bot to win the Mathematical Olympiad and a worldwide coding challenge, can’t train that same bot to maximize ad revenue in ways that degrade the user experience without ever conflicting with its stated principles? That’s what they do for a living!
Even assuming good faith, the pressure is unidirectional. An advertiser complains that ChatGPT is “unfairly harsh” on their product category. Does OpenAI adjust? A brand threatens to pull spend because ChatGPT keeps recommending competitors. Does OpenAI respond? This is the ordinary course of business on advertising-supported platforms, so it’s safe to assume the same will happen here. OpenAI won’t go against itself.
Now consider the inverse example: why didn’t the press release say anything about how chat memory will be used to target ads?
Millions of people have used ChatGPT to engage in the most intimate kinds of conversations. All that information is ad fuel. I’ve written about this a couple of times before: your dreams, your fears, your psychological blind spots are now data for an ad company to manufacture the perfect hook to sell you some product you don’t want. In Why Ads on ChatGPT Are More Terrifying Than You Think, I wrote:
People have always been worried that phones listen to us; now they will be, quite literally, inside our minds. When you show frustration with your partner, the AI will realize you’re a prime target for dating apps; when you express anxiety about your job, the AI will note you’re a target for wellness apps, and so on. Who do you think is the main demographic they will be targeting? Kids and teenagers.
Oh, wait, sure, the press release also swears by privacy: “Conversations are private from advertisers.” But again, there’s a strong and a weak interpretation here. Strong (the user’s): “My data is completely safe, no one knows how sad I was last week! Yay!” Weak (OpenAI’s): “Ads will manifest differently according to chat history even if advertisers don’t know how.”
This weak-strong interpretation dichotomy is a useful framing to understand what’s going on and what to expect in general. OpenAI doesn’t need to explicitly influence responses or let the ad company see your history; they only need to make tiny adjustments to what gets surfaced, what examples ChatGPT chooses to show you, how it frames recommendations, etc.
“We don’t influence responses” is technically true but functionally misleading. Ad money lives in the gaps between what users assume and what the company effectively does.
The bottom line: Take the least charitable interpretation of the press release and you will be right. Assume ads have arrived because the math doesn’t work and you will be right. Accept that not all users are treated equally, and you will be right, which leads me to my final point.
IV. A two-tier epistemic environment
OpenAI is creating separate information environments based on ability to pay. The divide between the AI-rich and the AI-poor is now a reality. I’ve written about this in depth in The AI-Rich and the AI-Poor, and also as an epilogue to Why Ads on ChatGPT Are More Terrifying Than You Think. From the former, where I introduced the term:
From now on—and for the first time since AI was established as a consumer industry—this technology will slice the social terrain in two with a bottomless abyss in between and, as a result, two social classes will emerge.
I was talking about high-priced AI tiers (up to $2,000/month; companies stopped at $200/month for now), but it applies just as well to having ads in the free tier.
Paying users get the “pure” version without ads. Free users get the enshittified version. Over time, this split will generate asymmetries. The paid user pays to keep their interests intact; the free user “pays” by letting someone else’s interests come first. You want a new car? The paid user gets a response tailored to their specs; the free user gets three or four Volkswagen suggestions before realizing OpenAI signed a deal with the German carmaker in 2024. You’re feeling anxious and ask for ways to cope? The paid user gets CBT techniques, breathing exercises, maybe a suggestion to talk to someone. The free user gets a link to Headspace.com or InsightTimer.com.
(Imagine if the paragraph above were a ChatGPT message: how can you tell whether those meditation apps have paid to be chosen as examples or whether ChatGPT picked them based on surveys or other pseudo-neutral metrics? You can’t.)
The AI-rich vs. AI-poor hierarchy is not hypothetical hand-wringing about inequality. There’s plenty of evidence from search and social media that this happens. AI assistants won’t be exempt from these dynamics just because they’re conversational rather than algorithmic; if anything, the opposite is true: tier-dependent user experiences will diverge more sharply here than in any other software category. And given that people nowadays use chatbots for everything, the impact of those differences on their lives will be greater too.
Ads will introduce small, persistent biases in how information is presented, what sources are prioritized, which solutions get suggested first. These biases will be nearly impossible to detect on a per-interaction basis but could be quite significant in aggregate. That’s how the poverty of the AI-poor compounds. And if you know anything about the power of compound interest over time, you know this is a big deal.
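The numbers below are an analogy, not a measurement: treat a small per-interaction bias like compound interest and watch what a year of daily use does to it (the 1% figure is invented purely for the example):

```python
# Analogy only: a small, invented per-interaction bias compounded over a year.
bias_per_interaction = 0.01   # assumed tiny nudge per conversation
interactions_per_year = 365   # assumed one ad-shaped conversation per day

cumulative_drift = (1 + bias_per_interaction) ** interactions_per_year
print(f"Relative drift after a year: {cumulative_drift:.1f}x")  # ~37.8x
```

Individually imperceptible nudges, repeated daily, compound into a large skew.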
The paid tiers are the control group of an otherwise unsuccessful worldwide social experiment. The free tiers, soon to be ad-ridden, are the rats in the maze.
V. One final thought
ChatGPT is not what it used to be. There are great alternatives (both Claude and Gemini are better, in my opinion). The sycophancy has not been solved, and it’s significantly lower in Claude. OpenAI focused its energy and resources on exploiting the brand’s popularity (fair enough) rather than on keeping a technical edge. Sam Altman is an untrustworthy leader (he says one thing and does another). The company made a slop engine (Sora), will allow porn, and now will put ads on your soup. ChatGPT is not what it used to be, and it’s no longer an essential tool in my toolkit.
I’m not asking you to opt out of this social experiment; that’s your choice to make. As for me, what am I going to do about this? I already did: For the first time since November 2022, I’ve deleted ChatGPT.



