There’s a new trend in AI. One that goes against the lessons from the last four or five years of constant AI news: Instead of promising the future through hype—which works just fine—thought leaders and influencers are anchoring us in the past.
Here’s a hint: “Is robotics about to have its own ChatGPT moment?” Or did Figure make it happen already? If not, perhaps Tesla will finally have its full self-driving ChatGPT moment. I’m not sure, though; Musk is prone to overselling. But Suno is surely “a ChatGPT for music” and, well, Udio is “the other ChatGPT for music.” In case we didn’t have enough, Sora is yet another “ChatGPT moment for AI.”
What a year, huh? And it’s only April!
What’s going on? Is everything suddenly having its ChatGPT moment? Robotics, self-driving, music, and video: are those areas as mature today as language modeling was in late 2022? Allow me to disagree. And it’s funny, because not even language modeling in late 2022 was ready for a ChatGPT moment. ChatGPT itself came as a surprise even to its makers. Ilya Sutskever, possibly the greatest mind in AI of the last five years, thought people would find it “unimpressive.” It just wasn’t very good.
But it happened anyway, so I have an alternative reading of why people are predicting it for all these other fields (some of which are just being born): drawing the “ChatGPT moment for X” analogy drives clicks. That’s it.
But let’s be generous and assume good intentions behind the evident (not even covert) incentives at play. There are two scenarios in which those predictions could be true: “a ChatGPT moment” refers either to an immense growth in popular interest in these other areas or to a comparable milestone in technical progress.
A ChatGPT moment in popular interest
A bold claim, but possible. Is it true? Perhaps… not:
Negligible interest in all of them. Dismantling this interpretation was trivial: in terms of popular interest, nothing except ChatGPT is having a ChatGPT moment.
Then, why are so many people being so emphatic about it? Because, counterintuitively, hype isn’t necessarily about painting a brighter future. It can be a way to reinforce a past to cling to.
When people are excited about something that hasn’t yet happened, talking about the future pays off (like me talking about an article that isn’t out yet, about an AI model that isn’t out yet either). However, in the rare cases where people’s attention and interest are saturated by something that has already happened, like ChatGPT, the incentives shift. Instead of talking about surprising things that are coming but that no one cares about yet (what’s Udio anyway?), what pays off is making constant callbacks and references to the familiar thing everyone knows, even when the state of affairs has moved on.
I’ve been thinking about that:
The best writing exists between the comfort of familiarity and the attractiveness of surprise. Find the sweet spot and watch the magic unfold.
I firmly believe what drives the AI narrative—the same thing that drives audience interest in what we write—is shaped by human psychology. Nir Eyal wrote about this in the “California Roll Rule.” Here’s an excerpt:
The lesson of the [popularity of the] California Roll is simple—people don’t want something truly new, they want the familiar done differently. Interestingly, this lesson applies just as much to the spread of innovation as it does to tastes in food.
It also applies to writing and AI. And to anything where human interest plays a role.
The sentence “A ChatGPT moment for music” hits the sweet spot between the familiar and the surprising: a sure strike on our receptors for fascination and excitement, without weirding us out with something beyond our understanding. It serves as an anchor that stabilizes the narrative and makes it appealing.
But an anchor is also a stopping mechanism. This is a problem. To understand why, we need to go to the second scenario.