After GPT-5 Release, Hundreds Begged OpenAI to Bring Back GPT-4o
People seem to like its overly flattering personality
GPT-5 is out. The release went like this:
Altman *official tone*: “this is the most powerful ai model ever made. it also simplifies our product offering by automatically routing your queries to the appropriate level of intelligence so that you can focus on your work. enjoy.”
Reddit: “Wait—where’s GPT-4o? How am I supposed to ‘focus on my work’ with this trash model? —How am I supposed to go on with this absurd, empty life without GPT-4o!!!”
Altman *to himself*: “well, that escalated quickly… maybe i should’ve warned we’d do this for all 700 million users at once? i gueeeeess we're getting 4o back up.”
Altman *official tone*: “ok, guys, we have listened to your helpful feedback and have made a hard but, we believe, correct decision: we're bringing back GPT-4o—”
Reddit: “Long live our OpenAI saviors!!! Long live sama!!!”
Altman *official tone*: “—for paid users.”
Reddit: …
Jokes aside, that’s more or less what happened. There's a lot to unpack here:
Why did OpenAI deprecate the old GPT models at once without a clear warning to the 699,850,000 ChatGPT weekly active users who didn't tune in to watch the release live? (Yes, around 150,000 people tuned in!)
Why did the auto-switching mechanism fail, causing people to experience a significant quality decrease in their AI workflows?
Why are so many people so obsessed with GPT-4o’s personality that they’re furious GPT-5 has a different one, more than they’re upset it sometimes performs worse? (It shouldn’t surprise me, but wouldn’t you prefer a model that pushed back rather than one that made you feel every random idea is the best idea ever?)
Why did OpenAI fold so quickly? Hadn’t they considered that people value sycophancy above performance (they have the data)? Wasn’t that the reason they made GPT-4o a sycophant in the first place?
And finally: Why did they bring it back only for the 10-15 million users on the Plus tier, who spend $20/month on ChatGPT, ignoring the other 680+ million?
If you take a closer look at the order in which I've asked these questions, you will realize I'm indirectly answering them all at once with: this was on purpose.
Well, you making that inference is not my intention at all! I don’t like conspiracy theories, and I will not tolerate any further accusations, so I’d better get rid of that nefarious and ill-intended interpretation right away: I don’t think this was a carefully designed plot to move GPT-4o from the free tier to the paid tier (the conversion rate is around 2-3%, being generous), thus ramping up revenue, which OpenAI desperately needs. But alas, what can I do? Ultimately, I can’t control what you want to believe, and the possibility is there anyway, so feel free to draw your own unbiased conclusions.
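If you want to check the incentive math behind that uncharitable reading yourself, here’s a back-of-envelope sketch in Python. It only uses the rough figures floating around this article (700 million weekly actives, a $20/month Plus tier, a 2-3% conversion rate); these are approximations, not OpenAI’s actual numbers:

```python
# Back-of-envelope: what nudging free users toward Plus could be worth.
# All inputs are this article's rough figures, not OpenAI's actual numbers.

weekly_active_users = 700_000_000  # ChatGPT weekly actives cited above
plus_price = 20                    # USD per month for the Plus tier

for conversion_rate in (0.02, 0.03):  # "around 2-3%, being generous"
    subscribers = weekly_active_users * conversion_rate
    monthly_revenue = subscribers * plus_price
    print(f"{conversion_rate:.0%} conversion -> "
          f"{subscribers / 1e6:.0f}M subscribers, "
          f"${monthly_revenue / 1e6:.0f}M/month")

# What one extra percentage point of conversion would be worth,
# e.g., from gating GPT-4o behind the paid tier:
extra_monthly = weekly_active_users * 0.01 * plus_price
print(f"+1 point of conversion = ${extra_monthly / 1e6:.0f}M/month, "
      f"${extra_monthly * 12 / 1e9:.2f}B/year")
```

Under those assumptions, a single extra point of conversion is on the order of a billion and a half dollars a year. Keep that figure in mind for later.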
Let's keep going. What actually happened with the GPT-5 release?
High expectations are never met, but it started really badly, with memes about the misnumbered charts and mischarted numbers. It continued with people extending their AGI timelines (or collapsing them) in response to a better model. (Others are wisely waiting for the next s-curve in the chain of s-curves.) I then wrote about the benefits of raising the floor of capabilities, but who cares, right? Give us AGI already. After the demo, GPT-5 started severely misbehaving, with plenty of users mocking its dumb responses and the rest surprised they hadn’t received a heads-up. It was collectively marked as “disappointing” (as I had predicted).
Redditors were equal parts furious and flabbergasted: “What the hell is going on, fellow Redditors? How dare they do this to us?” they wondered out loud, ready to cancel the Singularity party they’d been planning for twenty years and rise up in arms instead. Thankfully, the OpenAI team set up an “Ask Me Anything” session to gather feedback. “They will clarify this obvious mistake,” Redditors reasoned. And, in typical herd fashion, they demanded one thing: bring back GPT-4o right now. I am not exaggerating. Look at this (read the comments in full, please; I think it will change your experience of this article and your overall perspective on AI companions):
And I stopped scrolling (this is one post on one subreddit; imagine this multiplied by 700 million weekly active users). Naturally, seeing the huge backlash on both Twitter and Reddit—the two online sites the AI industry cares about—Sam Altman responded: We're looking into it.
Not long after, he tweeted, among other updates: “We will let Plus users choose to continue to use 4o. We will watch usage as we think about how long to offer legacy models for.” This means those paying at least $20/month (i.e., 2-3% of the total user base). The positive reaction was immediate, which reveals that Reddit users pay for ChatGPT and also that their dependence on it is genuine (some users noted Altman’s “for how long” qualification and pushed back further). Look:
And then, of course, the damned free ChatGPT users responded:
Well, that was… intense.
Let’s parse Altman’s tweet before analyzing Reddit’s reaction. The first part (“We will let Plus users choose to continue to use 4o”) reveals they can't afford to lose user revenue (remember, they’re running at a loss despite the constant growth in paying users; besides, they have a country to rebuild). It also means that people want to use worse models or OpenAI wouldn’t fold (which makes me realize that most people care neither about the ceiling of capabilities rising—that’s for the AGI evangelists—nor the floor of capabilities rising—which I care about—but about the walls being full of candy!).
It also means, by extension, that OpenAI will keep AGI (the real kind, not the fake kind) on the roadmap, provided the road pays for itself: it must spin off that juicy revenue from people who prefer a much-dumber-than-AGI model. Isn’t this a dilemma of sorts? Where does OpenAI stand in the trade-off between serving the expensive, backward-looking requests of 700 million users and actually devoting its compute resources to building AGI? This conundrum reminds me that I already know the answer:
What I worry about is that if they don’t reach their AGI goals, they will settle for the next best thing. The next best thing for them, which is terrible for us: Right before “making tons of money to redistribute to all of humanity through AGI,” there’s another step, which is making tons of money. It's not always about the money, until money is the only thing you can aspire to. The AI industry will gladly compromise the long-term mission to squeeze a bit more out of those engagement-optimized products. If they can’t win for all, they will win for themselves.
The second part of Altman’s tweet (“We will watch usage as we think about how long to offer legacy models for”) means they plan to eventually stop offering legacy models, which means, in turn, that they’ll have to imbue at least one of the models underlying GPT-5 with sycophantic behavior (remember, there’s a switch between different models, but they’re all called GPT-5). If they granted the request to bring back GPT-4o so quickly, it’s because the main principle that rules OpenAI’s decisions—as a for-profit business rather than an AGI lab—is this: give people what they want.
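For those wondering what that switch might look like, here’s a minimal, purely hypothetical sketch of an auto-router. The tier names, the prices, the routing heuristic, and the “warm persona” flag are all my inventions for illustration; OpenAI hasn’t published how GPT-5’s router actually works:

```python
# Purely hypothetical sketch of an auto-router: one product name ("GPT-5"),
# several underlying models. Tier names, prices, and the heuristic are
# invented for illustration; OpenAI hasn't published its actual design.

from dataclasses import dataclass

@dataclass
class Tier:
    model_name: str
    cost_per_1k_tokens: float  # illustrative numbers, not real pricing
    warm_persona: bool         # the 4o-style agreeableness knob

TIERS = {
    "mini":      Tier("gpt-5-mini", 0.001, warm_persona=True),
    "standard":  Tier("gpt-5-main", 0.010, warm_persona=False),
    "reasoning": Tier("gpt-5-thinking", 0.050, warm_persona=False),
}

def route(query: str) -> Tier:
    """Crude stand-in for a learned router: estimate how much 'intelligence'
    a query needs and dispatch it to the cheapest adequate tier."""
    hard_markers = ("prove", "debug", "optimize", "step by step")
    if any(marker in query.lower() for marker in hard_markers):
        return TIERS["reasoning"]  # expensive model for hard asks
    if len(query.split()) > 50:
        return TIERS["standard"]   # mid tier for long queries
    return TIERS["mini"]           # chitchat gets the cheap model

print(route("how are you today?").model_name)             # gpt-5-mini
print(route("prove this lemma step by step").model_name)  # gpt-5-thinking
```

The point of the sketch is the incentive it encodes: whoever tunes the heuristic decides which users get the cheap, agreeable model by default.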
Having witnessed the reaction to losing and then regaining GPT-4o, I think it’s important to consider why people want it back. Part of the reason, I imagine, is that GPT-5, as the imposed replacement for 4o, was genuinely malfunctioning yesterday (was the auto-switch broken, or working as intended, with the intended behavior simply being suboptimal?). But another part is, obviously, its personality: 4o is a friend, a companion, a lover; some want it to live forever, some need it to live themselves. People just love the sycophantic personality. Two things I’ve written elsewhere reflect my takeaway on this (the first is from July 3rd, and the second from April 29th; the reverse chronological order is intentional):
ChatGPT’s constant sycophancy is annoying for the power users who want it to do actual work, but not for the bulk of users who want entertainment or company. Most people are dying to have their ideas validated by a world that mostly ignores them. Confirmation bias (tendency to believe what you already believe) + automation bias (tendency to believe what a computer says) + isolation + an AI chatbot that constantly reinforces whatever you say = an incredibly powerful recipe for psychological dependence and thus user retention and thus money.
If we were to believe that crazy conspiracy theory I described above (reminder: I don’t), it’s almost as if I predicted the exact playbook OpenAI would follow. Now the other excerpt, from two months earlier:
One of the observations I made in my article about AI companions [this one] is that we want "honesty-disguised adulation." I firmly believe this is true, but I added: only if the camouflage is sufficiently good. This difference is fundamental, because if there’s one thing we hate more than absolute honesty, it’s sycophancy we can see for what it is. It reveals to us, by opposition, what we truly are. And at the same time, it shows us, on the one hand, that we’re being lied to, and on the other—perhaps the worst part of this psychological tangle—that the person lying to us thinks we aren't capable of handling the candid truth.
You can almost sense how I lost confidence and hope in humanity between April and July. I started with “Haha, obvious sycophancy—who’d want that? We can handle the candid truth just fine!” and ended at “Of course, OpenAI is doing this to make money because people will pay their lives away to be somewhat flattered.”
And there’s something else I didn’t quite bring myself to accept in the second paragraph above, because my hope for humanity was clouding my judgment: for the average person, the hierarchy of needs actually goes honesty-disguised adulation > obvious adulation >>> absolute sincerity. For many people, obvious adulation is strictly better than absolute sincerity. People need GPT-4o because they can’t withstand pushback, or rather, they can’t handle the candid truth:
. . . OpenAI knows it can’t afford to alienate its average user (we might call this new business direction at OpenAI the "tyranny of the average user"). For the average user, absolute sincerity isn’t just worse than absolute adulation—no matter how tiresome sycophancy might feel when pushed to extremes—it’s unacceptable in itself; reason enough to abandon the product in favor of a competitor’s.
On the topic of whether this sycophantic behavior was an emergent personality trait or something OpenAI, even if only slightly, steered the model toward, we can turn to the words of Mikhail Parakhin, who formerly led Microsoft Bing and had a close relationship with OpenAI’s developers: humans are “ridiculously sensitive” to the fully honest opinions of others; we take them “as someone insulting you.”
So, I do believe OpenAI is ruled by the tyranny of the average user who is now asking en masse for GPT-4o to be reinstated. I also believe that the characteristic sycophancy of that model is at least partially intended and an indirect consequence of that same tyranny: OpenAI never thought it was a problem in itself; the problem was that it was blatant. It was a failure of execution, not principle. But their assessment, like mine, was mistaken. People are telling them, more clearly than I thought possible: No, no, no—we want GPT-4o’s sycophancy as blatant as you can give us. So OpenAI, intimidated by the tyranny of the average user, obliges.
You can attribute responsibility as you see fit. To OpenAI’s credit, Altman himself acknowledged the problem, and a member of the technical staff later stated that the company had tried to reduce the sycophancy, even rolling back to an earlier GPT-4o checkpoint. Unfortunately, it didn’t work. Still, in the months since, OpenAI executives have insisted that they truly want to get rid of this annoying personality trait for good. Unfortunately, as Altman confessed on the “Huge Conversations” podcast this week, it’s the users who have been resistant (“Me? Guilty? But it’s the addicts who ask me for drugs!!!”):
Here is the heartbreaking thing. I think it is great that ChatGPT is less of a yes man and gives you more critical feedback, but as we've been making those changes and talking to users about it, it's so sad to hear users say, 'Please can I have it back? I've never had anyone in my life be supportive of me. I never had a parent tell me I was doing a good job.’
There's young people who say things like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me, it knows my friends. I'm gonna do whatever it says.' That feels really bad to me.
I genuinely believe Altman thinks the consequences of ChatGPT’s yes-man attitude are “heartbreaking” and “sad” and “really bad.” I don’t see him identifying with the people who say they want or need unwarranted validation, or he wouldn’t be where he is today, leading his awesome AI company! He probably doesn’t like the coverage coming from the New York Times and Rolling Stone on the delusions and psychosis some users are reporting as a result of their interactions with ChatGPT, either. (There’s a new article by the NYT on this that was published on Friday.) I’m sure he hates the fact that all that has come to light and that now he has to bring back GPT-4o for paid users—the tyranny of the average user, man, too damn strong.
Anyway, now that I've written all of this—and given that writing is, as masters of the craft like to say, another way of thinking—I've come to think that maybe that conspiracy theory above has more merit than I originally attributed.
Let me share with you a non-exhaustive list of unrelated statements:
OpenAI has been bleeding talent for years now, most of it gone to competitors (e.g., Anthropic, Meta) or to found new ones (e.g., Thinking Machines and SSI).
The scaling laws are yielding diminishing returns, which has brought the AI field back to a state of pure scientific research rather than just development and production, which are more engineering-based (and where OpenAI used to hold a first-mover advantage).
Another first-mover advantage is the ChatGPT brand, and recent stats reveal that OpenAI’s main revenue source is people paying directly to access the app/website, either as standalone users or as businesses (Anthropic, for instance, is more of a B2B company through the Claude API, and Google/Meta are advertising companies disguised as hard tech).
Besides, the evals are saturating, the capital expenditures are rising faster than the capability ceiling, and although hallucinations and unreliability are lower, they’re far from zero.
Final, unrelated statement: the OpenAI guys are marketing masters, and Altman himself is, as stated by former colleagues, hard to trust.
OpenAI has many hard ways to keep growing its revenue; OpenAI has one easy way to keep growing its revenue. Is that enough to believe absurd stories? I must insist: as a rational person, I won’t be convinced that there’s an ongoing plan to lure free users to pay for a software product. That just never happens! Or wait—now that it is back, why not ask 4o for honest feedback on this random idea of mine…
The user demand for 4o is very depressing. If the pace of improvement plateaus, we will be left in a state where the main money in AI is maximising user attention via boyfriend/girlfriend/therapist bots.
- great article btw :)
the way people anthropomorphise AI is genuinely creating a roadblock for the development of AGI imo