What No One Outside OpenAI Can Really Understand About OpenAI
It’s 2023. You’ve spent four years (or five, or six; you can’t remember) working on something—one big goal. You always knew what you wanted (the happy end of history, or so you liked to think). And you knew why you wanted it. You were relentlessly and tirelessly trying to figure out the how.
Even though you were merely taking shots in the dark, your steely determination triggered a global paradigm shift. Your discipline, luckily for you, was blessed by the gods. You didn’t believe in those—you don’t believe in them now either—but your faith, placed elsewhere, was strong nonetheless. That sufficed.
Back in 2017, the days are much longer and more boring. Nothing happened yesterday despite your efforts. Nothing, probably, will happen today. You try anyway because there’s not much else besides trying.
The morning starts like any other but suddenly, in an unexpected turn of events, a door leading somewhere magical unlocks before you. Not fully understanding the implications of your discovery, you cross the doorway, never to look back.
It’s 2019. Driven more by confidence in your instincts than by faith, you take another leap forward. Again, that uncanny sensation runs through your body—not of mere achievement, but of pure epiphanic bliss. Eureka!
A few years go by—2023. You’re at the vanguard of the world. No one else is even trying to do what you are. You’ve become convinced beyond a shadow of a doubt that the one big goal you once set—and the longing for it—is about to be fulfilled.
You look back. Six years pass through your mind in the blink of an eye. Thoughts of incoming world-changing events fill your focused mind, blurring not only the future you naively perceived as crystalline but also your conviction about the path you so carefully planned. People can’t begin to imagine what you’re about to unleash. You can’t either.
It doesn’t matter. You won’t stop now. You wanted this.
Steven Levy’s feature on OpenAI, “What OpenAI Really Wants,” is wonderful.
Whether you like the content or not, the way it’s written serves as a window to the soul of the company—not as an abstract entity, but as a group of people, like you and me, chasing something with all their might.
In particular, Levy revealed a detail that, as far as I know, has never been published before: how OpenAI lived, from the inside out, the discovery of the power of GPT, and how they foresaw what was about to come.
Levy describes how Alec Radford’s continuous efforts to build a generative language model, and Ilya Sutskever’s immediate realization of the value of the transformer—how it’d eventually change everything in AI—were the beginning of something big for the company. Something bigger than them—bigger than all of us.
Inspiration struck me the moment I read those lines. The firsthand testimony of OpenAI researchers about how they experienced that epiphany made me realize something I had never thought about before.
Those lines you just read in the intro—that was not a made-up story. They lived that.
They completed an impossible quest with the conviction of someone who has had a revelation. As if moved by a self-fulfilling prophecy, they embarked on the quest for greatness armed only with willpower and determination.
Radford and Sutskever. And Altman and Brockman. And the others. They didn’t just build GPT, GPT-2, GPT-3, GPT-4, and of course ChatGPT. They lived through it—in the most personal, private, and tangible sense of the word. It wasn’t the value of the achievement that set them apart but the intimacy with which they went through it.
They experienced the crushing mistakes; the endless attempts; the constant trial and error; sleepless nights and restless days; failing and getting up; failing again and getting up again—for years. Driven solely by the desire to attain The Goal and moved more by hope than certainty—without even the calming confidence of knowing how.
The rest of us, who witnessed the results from afar—either with the intention to point out the flaws of their approach or praise their breakthrough—didn’t really know what that was. We thought we did, but we didn’t. We can’t know.
They—the AGI believers working at OpenAI at the time, to use their wording—had an emotional proximity to the project that the rest of us can, at best, hope to simulate in our minds. Like when someone tells you about a dream and, although you can sense their emotion, neither the words they utter nor the images in your mind do it justice.
The group of those who can’t know includes, by definition, all critics of OpenAI’s methods or beliefs. And of its achievements, too. That tacit insight OpenAI got is unreachable for them. I’m not saying criticisms are useless (they’re somewhat helpful), but now I understand they will likely never reach their intended destination.
Because OpenAI researchers don’t care—they don’t care in such a profound way that it would be more appropriate to say they cannot care. They don’t have to cover their ears to not hear the snarky remarks from—in their eyes, hopelessly oblivious—outsiders. They’re effectively, almost literally, deaf to it.
Anyone who dares to point out that they’ve fallen short of some farther-off goal is not a goalpost mover to them, but small. So small and so tiny. Like, really, they can’t see them from the pedestal of the future they are creating.
From OpenAI’s point of view, they’ve sent Apollo 11 to the Moon; they’ve found the Higgs Boson—they’re just short of having solved P vs. NP or discovered the cure for death. I’m obviously exaggerating here, but that’s the point, right? I have to exaggerate what it may have felt like to go from nothing to ChatGPT and GPT-4—in what, four? five years?—to try to grasp what those who lived through it felt. What OpenAI felt.
After this rant is over, I will go back to saying whatever I want about OpenAI because there are a lot of things that should be said. I feel, however, that I never truly understood them. I was critically missing just how deep their beliefs run; how entwined they are with their experiences and achievements; with their memories, emotions, and identities.
This doesn’t in any way justify the decisions OpenAI, as a company, has made that conflict with its previous commitments, but it partially explains them.
Take, just as an example, their choice not to release the complete GPT-2 initially: talking about open-sourcing AI models in the abstract and in retrospect (as journalists and analysts do all the time) is quite different from having built something quite incredible—a first of its kind—whose abilities you can’t really test and whose limits you can’t find (as Radford experienced), and then having to decide whether to release it into the wild.
It also explains why they’re so confident that superintelligence (or the end of history, whatever sounds better to you) is near.
I like to think of myself as someone who finds it easy to admit when I’ve crossed paths with an alien reality and to accept, once that’s the case, that I can never hope to fully get it. A reality that, whatever I try, I won’t ever manage to integrate with my own.
Well, then I accept it—I’m part of those “others” who can’t get it. What the OpenAI people felt at the time is beyond me. It is beyond us. And I’m sure they can't help but feel this way, too, but from the other side. And they like it. They know they’re the ones who know. They can’t not know. That’s a blessing, of the kind reserved for the “enlightened.”
It’s also a curse because they really live in a bubble. I don’t mean that in a derogatory way (open to interpretation, though). They’re—if this whole thesis and my interpretation of what Levy wrote are correct—tragically detached from the rest of the world.
Not just in their faith that a godlike superintelligence is attainable—which must have been there since the beginning—but in the sense of being surrounded by a veil of untouchability that insulates them from anyone who has never experienced what they have—that is, everyone else.
They won’t stop now. They wanted this.