The Algorithmic Bridge

Why Industry Leaders Are Betting on Mutually Exclusive Futures

No one has a clue what comes next for AI

Alberto Romero
Dec 15, 2025

I. The Sutskever-Karpathy conundrum

Ilya Sutskever and Andrej Karpathy are geniuses.

Both were founding members of OpenAI and are now, in what can only be described as a “wise move,” ex-employees. They belong to the younger generation of AI experts but are as respected as the fathers of the deep learning revolution and twice as beloved by the community. They believe AI is neither a waste of time nor of water, and they are confident that at some indeterminate point in the future, machines will attain superintelligence. If you want to know what is coming next in AI, you have to follow the projects they’re currently working on. Unfortunately, their similarities end there, for those projects, it turns out, have taken them down quite dissimilar paths.

Sutskever started his journey as the disciple of one of the most prominent AI scientists of the last 50 years, Geoffrey Hinton, a Nobel Laureate and Turing Awardee. From the get-go, he was a strong candidate to become the AI wunderkind (he did: he joined Google Brain in 2013, then co-founded OpenAI in 2015). Karpathy became well-known—and was taken as a mentor by so many who, like me, decided to study AI after the resurgence of neural networks in 2012—for his Stanford computer vision course, CS231n.

Karpathy’s love for education was at least as powerful as his interest in AI, cementing him as the personification of an open door to that world. Sutskever, deeply praised from early on even within the academic stratosphere, kept his sights on the horizon. One was grounded in the most useful kind of pragmatism (teaching the younger generations); the other was looking at things that we mere mortals, concerned with our mundane affairs, couldn’t foresee.

Their paths crossed briefly in 2015 at the founding of OpenAI. After a multi-year hiatus that Karpathy spent at Tesla under Elon Musk, they joined forces again at OpenAI in 2023. But the reunion was more in name than in action. Sutskever spent his time trying to figure out how to solve the great puzzle of AI alignment (how to make superintelligence harmless), whereas Karpathy was building “a kind of JARVIS.” It would seem incoherent for OpenAI to task one of its most brilliant minds with nothing short of “how to keep a silicon god chained forever” while the other was trying to build a virtual agent with whom to conduct research together. From our myopic perspective, both goals are extremely ambitious. A serious analysis reveals, however, that the epistemic and spiritual distance between them is that between earth and heaven.

Or, in other words: if you believe god is coming to visit, wouldn’t you devote all your resources to preparing the best welcome? And if you don’t, should we take that as a hint that you are not actually that confident it’s going to happen at all?

The differences between Karpathy and Sutskever became stark beyond their goals inside OpenAI. Sutskever, after he participated in the failed board coup of November 2023 to dethrone Sam Altman (which we eventually learned was a mix of safety concerns and OpenAI’s pivot into a traditional product-focused company, with a pinch of Altman’s idiosyncratic lack of candor), became an invisible name. No one talked about him; he didn’t talk at all. Until he left to found Safe Superintelligence in June 2024. Karpathy, on the other hand, left OpenAI on good terms. He wanted to dedicate time to his other passion: making videos to teach others. He combined both—AI and education—into Eureka Labs, which he founded in July 2024.

That’s another important divergence: Karpathy’s startup says “AI will be your tutor,” while Sutskever’s says “AI will be your god.” Note this isn’t a value judgment on their approaches to AI. I believe both are interesting and both can be useful. What I find strange is that one’s validity sort of invalidates the other, and vice versa. If it makes sense to build a tutor, that implies god is not coming; if god is indeed coming, you don’t care about building a tutor. Something doesn’t add up.

One charitable explanation is that they’re covering different timescales.

Maybe both think education is valuable and the machine god is coming at some point, and they have simply chosen to focus on opposite ends of the spectrum between now and then, whether it’s three years or thirty. But this would make sense only if they both belonged to an institution—like OpenAI—allocating its resources intelligently and in a coordinated manner. Now that Karpathy and Sutskever are independent and independently pursuing their respective interests (and/or concerns), this can’t be the explanation: if Karpathy thought with a high degree of confidence that a superintelligence would emerge on a short-ish timeline (say, a decade, which is long-ish for Silicon Valley but short-ish for the rest of the world), he wouldn’t have started Eureka Labs in the first place.

The disparity, then, must be reduced to a difference of belief. And from that difference of belief, assuming they’re approximately equal in intelligence and knowledge but not in metaphysical affinities, it must follow that they simply don’t know. What was thought to be a political, logistical, technical—perhaps ethical—question is suddenly a matter of personal predilection.

I believe an absolute truth exists beyond our senses, a reality undistorted by our imperfect perceptions; like a celestial body hidden behind clouds, this truth remains constant regardless of our ability to see it clearly. (Something akin to Kant’s noumena or Plato’s world of Forms.) Any relativism stems not from the nature of reality itself, but from the limitations of our senses and reason. The divergence between Karpathy and Sutskever therefore reveals to me an intriguing contradiction: two experts, shaped by similar academic and industrial backgrounds, should share, by virtue of this common truth, a more aligned vision of their field’s ultimate outcome. Not because of some mystical insight into the unknowable—they are, after all, only human—but because their parallel paths should have guided them to similar conclusions.

Had they caught even the feeblest glimpse of that elusive outcome, their minds would have sculpted its shape in harmonious accord. Given that their current plans are mutually exclusive, I can only conclude that they remain clueless sailors in darkness: one, hopeful, charting a course through familiar waters; the other, bold, preparing for a tsunami that would render such voyages obsolete.

This divergence is, I’m afraid, not limited to the Sutskever-Karpathy conundrum.


II. Industry, academia, investors: no one has a clue
