Don’t you find it annoying that the CEOs of AI companies are constantly playing both sides? They hype AI to raise expectations only to try to calm the mood afterward, once enthusiasm spirals out of control.
That’s what OpenAI CEO Sam Altman does all the time. He’s done it again.
Sam Altman’s intentionally confusing messaging
In 2023, he said things like “AGI is gonna be wild” and that OpenAI’s 2024 products would make GPT-4 “look very quaint.” Now he says AGI will change the world but “much less than we think” and that GPT-5 “will be okay,” as he told journalists at Davos earlier this month.
I have a question: What game is he playing? It’s a rhetorical one because we already know. He swings public sentiment with one hand while managing his business with the other. Nothing unusual, although it’s rarely done with such mastery and smoothness. Altman plays the press like no other tech CEO.
The non-rhetorical question is this: Which one is the truth?
Will GPT-5 be “much smarter and provide more functions than previous models,” as he later said during another interview at Davos, or merely “okay” (not the adjective we want to hear)? Will AGI bring about a post-scarcity world of wealth abundance or change the world “much less than we think”?
Nuanced takes are prohibited by marketing laws
The problem with hype (and anti-hype, for that matter) is that it is the enemy of nuance.
You can’t generate over-the-top interest and attention with level-headed arguments; you need slogans. Altman knows this very well, so he avoids weak opinions. His stance often feels balanced because he constantly juggles takes that are emotionally opposite yet comparable in force. He’s the epitome of “strong opinions, weakly held.”
That’s not what you want from the fire-bearer of the gods.
He throws around crazy techno-optimistic predictions, like saying that AI will help us “capture the light cone of all future value” or that our destiny is to upload our minds and “live forever in computers,” and then tells the Senate that “if this technology goes wrong, it can go quite wrong.”
So, which one is it?
He doesn’t know. He’s just strategizing, carefully balancing hype against anti-hype. As we approach the moment of truth, however, these claims won’t rise to the occasion. Results can never fulfill expectations inflated by exaggeration, positive or negative. AI veterans know that very well.
Altman has lately been overhyping his GPT products and, especially, the road toward AGI. That’s why I think he’s now tempering his tone and his forecasts: the time has come to swing the pendulum back.
He said at Davos that “There has been no agreement on the definition of AGI,” which is as evasive an answer as he could give to the question of when AGI will arrive. Last year, however, OpenAI published this blog post.
Saying that AGI has no definition is a level-headed response, one his skeptical discursive adversaries would give, but it sounds strange coming from him. Can you invoke AGI’s lack of a definition to avoid the topic when your company published another blog post last year saying that superintelligence, an AI-based entity much more powerful than AGI and equally without a consensus definition, will come true within 10 years?
Nuance comes off as suspicious when hype is your game.
So, again, which one is it, Altman?
Is the hype too high or the results too low?
But let’s focus on the news… GPT-5, an “okay” product? I was skeptical of the Q* narrative and the latest rumors about GPT-5 being “the first of its kind.” Now I’m skeptical of Altman’s moderation.
Of course, the question that follows is whether Altman is softening his discourse because the hype is too high or because the results are too low.
Saying GPT-5 will be “okay” (which admittedly can be interpreted as “well… not as bad as the others” or simply as “good”) is not what OpenAI supporters want to hear. They want AGI already. They expect it.
Perhaps it’s a mix of both. Altman knows we’re expecting too much, and because of that we might be “begging to be disappointed,” as he said about the “GPT-4 will have 100 trillion parameters” meme (it reportedly has ~1.8 trillion).
He also surely knows something we don’t. Perhaps GPT-5 is better than GPT-4, but not by as much as we’d like (although recent rumors, this time from OpenAI staff rather than random anons, suggest otherwise).
GPT-4 has remained on top for too long (perhaps not for much longer). It might be harder than he thought to build significantly better technology. Another option, potentially a heartbreaking one, is that we might have entered a plateau (some experts disagree).
It would be a good thing if…
GPT-5 being less than we want isn’t a good thing. Altman being forced to tone down his claims isn’t a good thing.
What I think would be a good thing is if we left AI alone. It’d be a good thing if we allowed it to grow, develop, improve, and blow us away, without imposing on it crazy expectations that it will never have the opportunity to fulfill.
The first step to get there is in the hands of people like Altman. Don’t play games. Be more honest. You can choose the marketing path, but its long-term payoff is uncertain, and AI’s old demons of annoyance, fatigue, and distrust await.
Let me end with a reflection.
We adapt fast. The window in which we can feel the kind of inexplicable awe we’re looking for in the next generation of AI tools opens only after our expectations have been surpassed, and it closes once that adaptation takes hold. Don’t let your unreasonable expectations break that window before it’s even opened.
It’d be a good thing if however good GPT-5 turns out to be, we could genuinely exclaim: wow.