8 Insights to Make Sense of OpenAI o3
Don’t go into 2025 without understanding this, or you won't understand anything
OpenAI o3 shocked everyone with its impressive performance on some of the hardest math, coding, science, and reasoning benchmarks. Above human average on all of them; beyond specialists’ ability and knowledge on the majority.
You don’t see this kind of state-of-the-art leap every day. You don’t see it every year either. No competitor seems close right now. The only other two times I remember being as surprised by an AI breakthrough were AlphaGo in 2016 and GPT-3 in 2020. I guess big events have a four-year cycle—AGI for 2028, here we go!
Jokes aside. We don’t know if OpenAI’s demo was a faithful representation of o3 (we won’t know until independent testing is conducted after the model’s release, expected in Q1 or Q2 of 2025). If it was—an assumption I will make here—being shocked by it is easy (if you aren’t, I urge you to read this). What’s hard is understanding the implications of its existence.
I’ve compiled a few questions you should be able to make an informed guess about to navigate what’s coming. If you can’t, you have to read this article (or a similar one).
How does o3 change the rules of the game? What’s the next point in the trajectory that o1 → o3 foreshadows? What can you personally expect in 2025? And later? Will the world end? Will you lose your job? Will nothing happen? Will you be able to access these more powerful AI models normally? Will they become too expensive? Will costs go down or up? What about OpenAI’s competitors, will they follow suit? Who has fallen behind? Do the new scaling laws work? Did the old ones plateau? How do those two trends come together? Can we trust OpenAI? Are the benchmarks reflective of what they claim to measure? Do we need new ones? How does the “o” series differ from the GPT series? Is o3 generative AI? Is it a new thing yet to be coined? Is o3 a general intelligence (AGI)? What is it missing if not?
So many questions but so little information, insight, and context to answer them. This article aims to be a resource to alleviate that gap, a light in the darkness of the mysteries that lie ahead. It is somewhat easy to follow progress when it occurs in a continuum. When it takes unexpected leaps or direction shifts, you need someone with a map.
I won’t give you definitive answers because I don’t have them. I will try to give you some insights so you can learn to see. It’s more useful to teach someone to fish than to fish for them. Consider this fishing rod my Christmas gift. I wish you a prosperous next year fishing for the answers we are all seeking today.
Here are the insights divided into eight sections:
The scaling laws of inference-time compute work
OpenAI o3 is every bit as impressive as it seems
The schism between the AI-rich and the AI-poor
Generative AI is no longer the frontier
Average users don’t need to worry (for now)
No general intelligence is dumb at times
Benchmarks are imperfect metrics for room readers
Some competitors will catch up; most won’t