42 Comments
Cole Ogden:

Quite honestly, I wouldn’t even trust the paid tiers to remain free of advertiser interests. Maybe at the enterprise level AI companies might be more careful, but at the paid individual consumer level, why not? It doesn’t meaningfully change the chances of it being discovered, and if it somehow is, people aren’t going to be OK with it just because it was “only the free tier.” Upper-income spenders are becoming a larger and larger portion of consumer spending these days anyway.

Alberto Romero:

Agreed. I think this is merely testing the waters. If they don't see much churn but remain unable to cover costs, they might put ads in Plus (Go will have ads even though it's a paid tier, just cheaper at $8/month).

@colleenkenny:

This is absolutely true.

Americans in the top 10% income bracket made up nearly half (49.2%) of total consumer spending during Q2, per an analysis of recent Federal Reserve data by Moody’s Analytics Chief Economist Mark Zandi.

That number is up from 48.5% in Q1, highlighting the growing divide between high-income earners and middle/lower-income earners in the country.

Ad-based models “enshittify” platforms and degrade users. The goal is to hack into our desires and sell us what we think we lack. The addictive nature of the tech makes it so hard to resist.

AI Governance Lead ⚡:

That's probably a wise perspective.

alternatyves:

They’ll put ads in paid tiers as well. Wait for it.

Alberto Romero:

Possibly… they seem to be up against the ropes.

John Bassler:

Very nice overview of the issue. Thank you.

Dana van der Merwe:

Some deep thinking there, followed by decisive behaviour. Lately I have only been using ChatGPT for the occasional pretty picture. I would probably follow your example, since Claude and DeepSeek are more than adequate substitutes.

Alberto Romero:

Good choice, Claude is definitely better!

Odin's Eye:

I absolutely agree

Peter W.:

I am reluctant to use DeepSeek on their servers, which are located in China. Their AI engine is quite good; the challenge from my POV is to find a provider in the US or Europe where we have at least some way of holding that provider to its promises of confidentiality and privacy. A local (self-hosted) distill of DeepSeek (or of others, like Qwen) might be a good, if more limited, alternative, but it still isn't straightforward enough to set up to be something most of us could just use.

Odin's Eye:

Yes. The sad truth is that we need to be conscious of any queries no matter where the servers are located. We can’t assume privacy.

Olivier Roland:

Use Venice.ai; they are strongly privacy-first (they don’t have access to your prompts and data) and provide access to the latest DeepSeek and GLM models.

Cathy Reisenwitz:

I did this a month or so ago. Now I'm a Claude girlie.

Schroedinger's Octopus:

I guess I'm going to do the same; seeing Fidji Simo get on board after 10 years at Meta … well, even if her posts on Substack are nice and personal, with all those billions at stake (and Greg B willing to become a billionaire, and probably all the others).

I've had a Claude Max subscription since December and I'm super happy with it. And Code and Cowork are really great tools.

andrewb:

Users will flow to the last LLM that is free and ad-free.

Then (as others have mentioned) there will be embedded ads, like in some Netflix/Prime series.

Or you'll pay for no ads, like Prime/Netflix/...

Alberto Romero:

Yes, I hope at least one company remains true to its word and keeps a subscription business model rather than switching to ads.

Priank Ravichandar:

Your point about the true influence of ads being unknowable is so important! Even if an ad is clearly labeled in the app, the model could still be priming users to be more receptive to advertising in less obvious ways. Although the company claims that responses are “driven by what’s objectively useful, never by advertising,” the average user has no reliable way to detect when responses are subtly biasing them.

Pascal Montjovent:

We Europeans tend to view neutral AI as a civic right, while Silicon Valley approaches it as a pragmatic pricing shift. This has created a deep rift where they bankroll the push into the frontier, and we act as its moral referees. I wonder: could our regulatory focus eventually turn into an advantage?

My concern is that if US firms saw users migrating toward "ethical" models, they could try to mirror our legal frameworks to stay competitive. Yet, even then, a legacy oil giant can't pivot to solar overnight. Their entire DNA is built on data extraction and ad-revenue logic, making a shift to a truly "civic" AI structurally difficult for them, regardless of the law.

Then there’s the technical horizon. According to the Stanford HAI 2025 AI Index Report, inference costs are dropping drastically while energy efficiency is improving by 40% annually. If intelligence eventually becomes nearly free, maybe this ad-supported era is just a messy transition rather than a permanent structural decay.

Do you think this "enshittification" is an inescapable trap, or just the temporary friction of a technology searching for its true economic floor?

A Claude fanboy ;)

Kevin Beck:

Great article.

I wonder what this issue looks like to the advertisers. One approach is every time you ask a question, the chatbot does some keyword matching and plops a vaguely related ad at the top of the screen. This is no different from sponsored links in a Google search: rather annoying but also relatively obvious and easy to ignore. Any attempt to go deeper and bias the chatbot's answers to favor an advertiser opens a big can of worms from the advertiser's perspective: what exactly am I paying for? Advertisers are used to paying by the number of impressions - do they pay for each answer that is intentionally biased? How much bias am I getting for my money? What keeps the chatbot from hallucinating outrageous claims about my product that do more harm than good? How do I know what users are actually seeing as a result of the advertising money I am paying? What if two direct competitors both want to buy influence from the same provider?

I think there is less incentive than you imagine for the most egregious forms of advertising. They will be hard to implement reliably and *very* difficult to sell to advertisers.

Alberto Romero:

You're probably right about this. Ads that are clearly labelled as such and kept separate from the response are not as insidious as ads integrated into the response. Still, those ads will use all the information we've shared with the chatbot and will occupy a significant part of the screen with every response. I don't think I want either of those things. But the most important part is that the business model couldn't be made to work without ads, because most people don't see enough value to pay $20/month, which is incredible to me. I guess it's hard for most people to value a tool that behaves unreliably that much.

Peter W.:

I am glad that I stopped interacting with ChatGPT a long time ago. And maybe we should have taken a closer look at the name and taken it for what it apparently is: Chat(!) GPT. Chat is, if the dictionary can be trusted, at best light conversation, and often enough just gossip. The decision by OpenAI to now openly use ChatGPT as a platform for targeted ad delivery confirms that we shouldn't entrust it with any meaningful information about ourselves, for the reasons well explained in the original post. IMHO, the breaking of previous promises is a sign of increasing desperation: OpenAI still hasn't figured out how to make anywhere near enough money to give investors a meaningful return on investment.

As for the future of OpenAI, I suggest monitoring who is heading for the exit (personnel and investors).

Nick Hounsome:

The key issue is whether or not the LLM uses anything about paying advertisers as input.

The press release implies not and I'm inclined to believe it.

If it does, I would expect it to be quite obvious, and for OpenAI to be called out on it in a way that would be a marketing and PR disaster.

I suspect that the ads will be selected by post-processing the conversation (together with cookies), and I don't see why this should bother anyone (apart from the basic dislike of ads). From the advertisers' point of view it should still generate more clicks than the current cookie system used throughout the web, and from the users' point of view the ads will be more relevant and won't affect the output.

Alberto Romero:

Why would it bother anyone that OpenAI uses an intimate conversation to better target an ad? Not everyone wants to be sold things based on everything they talk about. Besides, although I also believe what they say (aside from the weak-strong dichotomy I explained), this is a slippery slope. If this kind of ad doesn't work for users (too obvious/annoying), then OpenAI might take it a step further, adding ads to the conversation in a more insidious way that is also less detectable by users. I don't think they will go that far unless... unless they're on the verge of dying. As a "last resort."

Jochen Madler:

I hate to be the douche here, but you are wrong.

Obviously, it's bad that ads are coming to ChatGPT. I also don't like it. It's sad.

But the ads are NOT in the stream of answers. Instead, they are displayed as an extra badge at the bottom, like a pop-up. This pop-up is triggered based on the flow of the conversation, and you can bet they optimize for it showing up at the right, revenue-maximizing time.

That said, the ad badge is always OUTSIDE the answer stream. It is separate. It's not woven into the answer. Your example with ChatGPT answering headspace.com in the LLM stream, and the user not knowing whether it's paid or not: this is just plain wrong! If Headspace were sponsored, the badge at the bottom would pop up.

I'm interested to see how you update the article. Given this new information, I think your argument might soften.

P.S.: This is based on the publicly available information from OpenAI as of Feb 15, 2026.

Neill Killgore:

Google already claims that its ads improve the user experience and objectively improve its search results.

How long before OpenAI runs some internal A/B testing and can show that users prefer the ad-enabled responses? Never mind that those responses could be given more time to respond, additional context, or any number of other invisible advantages.

AJ Solaris:

Good for you. I deleted it as soon as Sora 2 came out. What a complete waste of resources and lack of imagination. If we are to experiment with these AI tools and try to integrate them into our future society, we need to choose companies that are at least thinking about human alignment and safety.

Carlos Guadián:

Hi Alberto, I appreciate how you point out that the introduction of advertising changes the relationship between the tool and the user, creating a divide between those who can pay for a “clean” experience and those who are first exposed to commercial interests, which ultimately erodes trust in the service. This split between AI-rich and AI-poor users connects with the idea of a new labor aristocracy (https://carlosguadian.substack.com/p/perteneces-a-la-nueva-aristocracia), where unequal access to advanced AI tools amplifies the economic advantages of a few while leaving others at a competitive disadvantage.