What Did You Think Getting Closer to AGI Would Be Like?
Some reflections before we reach the tipping point
This post is a follow-up to OpenAI’s release of o1 and o1 Pro, but it’s more about the trajectory—as Noam Brown likes to say—than this specific model.
It’s about artificial general intelligence (AGI), and what the road there will feel like. After skimming through the reactions to o1, I’m starting to realize that most people haven’t thought through how the road to AGI will make us humans feel, especially as we near the finish line.
Before I get into the examples, let me share three important caveats.
I. OpenAI o1 is not AGI
Although this post is about AGI, I’m not saying o1 is AGI, that it hints at AGI, or that it’s a necessary step toward AGI. I’m not even saying it’s a directionally correct step (no one knows).
OpenAI has placed its bets on extending the power of large language models with a new set of scaling laws that focus on making the models “think” when answering by allocating more computing power at test time. That’s an ok bet—one worth keeping an eye on—but like all bets, it’s not certain.
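To make the bet concrete: OpenAI hasn't disclosed how o1 spends its extra inference compute, so the sketch below uses the simplest public stand-in for the idea, self-consistency (sample several candidate answers, keep the majority vote). Everything in it is invented for illustration: the `sample_answer` stub replaces a real model call, and the 60% per-sample accuracy is an arbitrary number.

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Hypothetical stand-in for one stochastic model call.
    A real system would query an LLM here; this stub is right
    60% of the time, a number chosen purely for illustration."""
    return "right" if random.random() < 0.6 else "wrong"

def answer_with_test_time_compute(question: str, n_samples: int) -> str:
    """Spend more compute at inference: sample N candidate answers
    and return the majority vote (self-consistency)."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Odd N avoids ties. More samples -> more test-time compute
# -> a more reliable final answer.
for n in (1, 5, 25, 125):
    trials = [answer_with_test_time_compute("q", n) for _ in range(1000)]
    print(f"N={n:>3}: accuracy ~ {trials.count('right') / len(trials):.2f}")
```

The toy only shows the trend the new scaling laws formalize: each extra sample costs more inference compute and buys a more reliable answer, with diminishing returns.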
I think fundamental elements are missing in the intelligence equation as AI companies are posing it today. I believe the new scaling laws will surprise many people but will eventually find a ceiling—like the current ones did—before getting to AGI.
Approaches like Yann LeCun’s, François Chollet’s, or Fei-Fei Li’s are at least as appealing to me as scaling LLMs.
II. No AGI is dumb at times
If an entity's intelligence is too uneven—struggling more than a 5-year-old with some tasks while excelling beyond a PhD in others—then it doesn't qualify as AGI.
No general intelligence is bound by Moravec’s paradox: Easy stuff and hard stuff are all just stuff for an entity that is all-intelligent.
By definition, an AGI exhibits transversal consistency in its intelligence. It may not excel equally at all problems, but its performance is marked by a distinct low variance: It doesn’t swing between behaving like a toddler one moment and Einstein the next but maintains the steady competence of a typical smart human adult at all times (including in areas where a human adult, not being an AGI, would fail).1
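To put a number on “low variance,” here’s a toy comparison with made-up per-domain scores, contrasting a spiky system that tops hard benchmarks yet fails trivial tasks with a steady one that never drops below competent. Only the second profile fits the definition above; every score is invented for illustration.

```python
from statistics import mean, pstdev

# Invented per-domain scores (0-100), purely for illustration.
spiky  = {"olympiad math": 95, "competitive coding": 90,
          "counting letters": 20, "simple logic puzzle": 30}
steady = {"olympiad math": 70, "competitive coding": 65,
          "counting letters": 85, "simple logic puzzle": 80}

for name, scores in (("spiky", spiky), ("steady", steady)):
    vals = list(scores.values())
    print(f"{name}: mean={mean(vals):.0f}, std={pstdev(vals):.0f}")

# spiky:  mean=59, std=34 -> uneven; fails the AGI bar above
# steady: mean=75, std=8  -> transversal consistency
```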
III. AGI is not closer today
I’m not saying AGI is close, that o1 got us closer, or that something else inspired by o1 will get us closer. I believe AGI has remained the same distance away since OpenAI proved the GPT concept (i.e., every year we’ve made exactly one year of progress toward the AGI goal, meaning my timeline hasn’t shrunk because of recent events).
I’m still on the fence about whether the new scaling laws actually get us closer to AGI (I mean, they do! But perhaps they’ll hit a true wall down the road and force the field to return to old ideas, which is, in practice, getting further from AGI).
In any case, o1 isn’t a clear milestone in that regard: The ceiling rises—as shown by its benchmark performance on AIME, Codeforces, and GPQA—but the floor remains disturbingly low, at a worth-dunking-on level.
IV. Some beautiful examples
So, to sum up, this post is about o1 and about AGI, but I’m not drawing a tight relationship between the two. The only association I want to underscore here is a rather superficial one but, I believe, an important one to notice: what we humans will feel about o1 at its best is no different from what we will feel about AGI.
Even if o1 is not AGI or a significant step toward that goal—even if it is, god forbid, an off-ramp—it can tell us something about what it feels like to live in a world where AGI is actually close. Because the main characteristic of AGI (that almost no one dares talk about) is that it will make us feel small.
This, shallow as it sounds written down, has important implications. To make my point clear, I will share several examples of o1’s abilities that I’ve gathered from X and elsewhere. Then, I’ll comment on what the road to AGI looks like from the average human standpoint. Spoiler: It’s not beautiful.