Forgive me.
First, because this is the second article I publish today and I don’t want to be annoying. Second, because this is the 143,527th post I’ve written about OpenAI recently (slight exaggeration there, but not by much). They’re entertaining, aren’t they? At least for those of us covering what they do; not sure about readers who want this constant drama to cease.
Anyway, I didn’t write today what I want to share with you. Nor this week. No, I wrote it right after the Nov 2023 board coup. A year ago, when I was trying to understand what was going on with OpenAI, I decided to trace my analysis down to the core issue that has troubled the company since its inception. I think I found it. And it remains critical in light of recent developments, which I briefly outline below (I won’t comment on them. Others already have and info is still coming out):
Mira Murati, CTO, Bob McGrew, Chief Research Officer, and Barret Zoph, VP of Research, left the company this week, surprising staff and leadership alike.
OpenAI is in talks to raise $6.5 billion at a $150 billion valuation, but investors have reportedly imposed two conditions: remove the cap that limits profits at 100x, and pivot the company into a for-profit benefit corporation no longer controlled by the non-profit entity (which will still exist and will own a minority stake in the for-profit).
Sam Altman is reportedly getting equity at the new valuation. Bloomberg reported 7%, but newer sources deny there’s a concrete number, or even that Altman will get any equity at all (what they say is that investors find it weird, even suspicious, that the CEO and co-founder doesn’t have any equity, which is true).
The Wall Street Journal has published more details of what’s happened inside OpenAI since the failed board coup last year. My takeaway: Ilya Sutskever, Jan Leike, John Schulman, Mira Murati, and others all left to some degree due to the conflict between prioritizing products + profits vs research + safety.
Below is my reading from one year ago of why the board coup happened (originally published on Nov 22, 2023), which is, save for a few idiosyncratic details of the moment, the same reason why OpenAI is going through this now. (Striking, given how much the tech has advanced and the market has changed.)
I’ve skipped the intro and the conclusion, which are completely focused on the coup. Regarding the rest, I haven’t changed a word. Some things—details like dates and direct references to those events—you can safely ignore. Others, I couldn’t have possibly predicted a year ago. But having re-read it in full today, I believe it’s worth reading to understand the burdens OpenAI has been carrying since it was born—and still carries to this day.
What makes OpenAI truly special
When I take a bird’s-eye view of the events of the past five days, looking from a fresh perspective, I see the exact same AI startup I’ve been following for years, but wide open (about time), as if it had received a clean cut right at its core so that its innards, ugly and raw as all innards are, could no longer hide.
What this means is simply that we’ve had, for a weekend, unusual access to the usual power struggles and behind-the-scenes shenanigans that are commonplace in world-class companies young and old (also in governments and virtually any place where power plays a big role). These things happen all the time; we just don’t have first-hand or even second-hand contact with them. OpenAI is a very public company (despite being private and increasingly closed in its scientific and engineering efforts). In a general sense, it’s a celebrity startup: we know who they are, what they do, and why they do it. The apparent relevance of this weekend’s events reflects that more than ever before.
But OpenAI doesn’t enjoy (or suffer, depending on how you look at it) this kind of inordinate publicity for arbitrary reasons, but because of two concrete realities: one that the company’s leadership established at its inception and another that’s the result of the great work they’ve done over the years.
First, OpenAI claims to be working on what will presumably be the most important technology humanity has ever created: AGI (artificial general intelligence), “a highly autonomous system that outperforms humans at most economically valuable work.” AGI, however attained, promises to redefine the rules of society.
Second, although whether OpenAI achieves its goal or not is a different question, the truth is they’ve already made important strides toward it (not everyone agrees), which makes it plausibly achievable and sparks widespread interest — in particular, GPT-2 and GPT-3 a few years ago and, more recently, GPT-4 and ChatGPT. People have now directly felt what AI can do and have surely imagined what it will potentially do. That’s unprecedented in the history of AI — for the first time, reality competes with popular science fiction as people’s mental picture of the future.
The company’s ambitions and successes put it in the spotlight. The grandiose purpose it chases after and the trail of satisfactory achievements it leaves behind have earned it the public attention and scrutiny it gets. We have treated it singularly over the last few days only because OpenAI branded itself as a singular company and because, to a respectable degree, they’ve proved to be one.
There’s really not much unusual about what just happened — what’s unusual is what OpenAI thinks of OpenAI.
The original sin of OpenAI’s founders
Some of the interest OpenAI attracts is intended solely to criticize it. But is the pursuit of AGI a problem in itself, or worthy of criticism? I don’t think so — building it is much harder than criticizing the efforts. Hype is not ideal, but anti-hype can often be worse. This crisis has accentuated the kind of attacks the company has been receiving for years (I’m guilty, too). I think all the events between the firing and the rehiring of Altman would be unimportant for the press and public opinion, and literally inconsequential for the world, except for one very important thing: OpenAI’s original promise wasn’t just to create AGI but to make sure it “benefits all of humanity.”
Making AGI universally beneficial was a constraint no one forced them to accept. They did it themselves out of pure conviction that AGI should be safe and beneficial for humanity or it shouldn’t be at all. Laudable, with no buts, except for one: it has proven so hard — even when we are still so far from AGI — to meet this requirement that it has become a profound hindrance to OpenAI’s efforts. I don’t think anyone outside the company truly believed that ideal was going to materialize. The OpenAI executive team was naive. I don’t give credence to the idea that they believed setting such a high standard would grant them more opportunities or help them attract better talent, and so on. It was a mistake born from the purest, sincerest idealism. The original sin of OpenAI’s founders.
That original sin forced the founders to inadvertently chain other errors, one after the other, until today. It made them start OpenAI as a non-profit, setting up the now infamous board. When they realized they’d need the kind of money that required help from Big Tech, staying a non-profit became unviable. They had to publicly backtrack — receiving deserved criticism — to a capped-profit structure and partner with Microsoft, risking the mission they had set up (later we learned that it was Elon Musk’s withdrawal that forced Altman to seek investment from Satya Nadella). Capital had finally closed its grip on the otherwise wholesome endeavor.
OpenAI also promised to hold AI safety and AI alignment as the highest priority in its hierarchy of values. If OpenAI had to be destroyed for the sake of safety, the board would do it (the board’s critics say this is nonsense, but think about this exact scenario with nuclear weapons, for instance; it becomes much clearer that Helen Toner is right, at least in principle). If someone, including Altman, diverged from the safest path, the board could fire him. If investors tried to pressure the company to follow a high-growth, low-safety route, the board would cut ties. That was important. At least that’s what Altman said right before he was fired.
The world evaluates you against the standards you set for yourself
The founders all agreed this was the essence of OpenAI — not just an AGI company but a safety-first AGI company, counterpart to DeepMind, which was effectively captured by Google at the time. Many people, inside and outside the company, were enthusiastic about it because of this strong set of immovable principles. But principles live in the realm of theory. As the saying goes, “no plan survives contact with the enemy.” The real world, a ubiquitous enemy for all idealists, has a knack for disrupting these kinds of abstractly flawless plans.
Perhaps, over time, due to the amazing and unexpected success of ChatGPT or maybe a prior shift of beliefs for some undisclosed reason, the startup suffered from internal discrepancies that never really emerged when it wasn’t yet so successful and well-known. Perhaps those discrepancies always existed as seeds waiting for the circumstances to water them; Altman and Sutskever, for instance, have radically different backgrounds, which marks a subtle but fundamental divergence in their approaches to AGI. Whatever the case, it seems that as time went by and the material reality on which they were building their predictions changed, those formerly tiny disagreements grew too much to be acceptable.
But that happens, right? Everywhere, all the time. If it’s been newsworthy these past few days, it’s only because OpenAI was our hope — the eventual triumph born out of a seemingly altruistic premise that proved harder and harder to uphold as they got closer to AGI. The founders set such a high bar to clear that it was as if they were running uphill with their feet tied together. Had OpenAI begun as a pure for-profit — as most AI companies are, and no one says a thing — would it have faced this kind of public backlash for this weekend’s events? I don’t think so.
What makes OpenAI special — and especially attackable — is that they held themselves to a higher standard than most, and we’ve been evaluating them against that standard, which, by definition, was impossible to meet.
OpenAI should abandon its divine aura
What’s wrong with OpenAI’s principles? A technology that’s beneficial for all of humanity can mean two things. First, what I believe OpenAI always meant is that AGI would benefit everybody from the outset. Perhaps not everyone equally (which is impossible even in theory), but a kind of tech designed to be not just a net good but a gross good for the world (I believe they believed this was possible; I don’t think it is, though).
Second, the easier interpretation is that by “beneficial for all” OpenAI actually meant “beneficial for all at some unpredictable and arbitrarily distant point in the future.” That’s not only harder to evaluate but also trivially true. To some degree, all technologies are purely beneficial if we wait long enough. What would we do without farming or writing or fire or the wheel? AGI will surely meet this same criterion. But that’s a semantic trick.
To avoid this kind of confusion and prevent future backlash, what OpenAI should do is come down from the pedestal on which it places itself and become, like all the other companies, a common enterprise, even if it remains in pursuit of an uncommon goal. So here’s some unsolicited advice for OpenAI.
Drop the “a non-profit owns and controls us” discourse. We know that money rules the company, as it couldn’t be otherwise in a capitalist society. That’s fine. What is objectionable is the presumptuousness with which you make it look like you’re on moral high ground, from which you can then build an enticing narrative to sell to the world. The weird capped-profit structure isn’t really working either and, likewise, that’s not a bad thing if you accept it.
Drop the “for all of humanity” bit that gives you an aura of being our savior (that never ends well). There are no universal human values. Humanity is in constant conflict with itself; just look around. We live realities so alien to one another that not even within each of us is there agreement all the time. I mean, you are building your AI systems by scraping human data and using vulnerable people to train them, only for the resulting products to erase millions of jobs. Fine, the world is deeply imperfect and those are instances of that imperfection, but come on, just don’t try to make it seem like AI will solve every social and political problem — including the ones you’re worsening — with a tech-shaped panacea. You can use whatever means you want to achieve your ends, but don’t proselytize that your ends justify the means you choose.
Finally, drop the “we are the single most important group of people currently living” vibes that put you in a sort of enlightened-despot framing. It gives you the aura of a religious cult. The bubble you live in only makes the rest of us unable to relate to you. Perhaps you don’t care, but that attitude defines the relationship you have with the world. Do people even want what you claim to be creating? Do you even care? Those questions are important.
If you do that, you will be fine. As fine as you should have been all these years, and especially this weekend. No one will be able to make any criticism of substance, except perhaps to provide input about your methodology. Things only turn out bad or good against the standards we set for them. Google fired its top AI ethics researchers a couple of years ago. Microsoft disbanded its responsible AI team earlier this year. Meta did the same thing quietly over the weekend. Yet it is you who receive the most criticism, just because you seem to care the most about safety while Altman wants the company to grow faster? It doesn’t make sense.
It’s when expectations go unmet that criticism emerges. Don’t let that happen. You are an imperfect company, like all are, with things to improve on the inside and the outside. You are playing by the rules of an imperfect game, but you are doing great work. Not everyone values that part, but it’s worthy of respect and admiration.
I think the "original sin" is the decision to steal writing and artwork. No permission and no compensation - just take what they want! Somehow no pennies fall to those whose real intelligence fuels this monster. The treatment of workers and horrific environmental costs add to the damage. No thanks!
Thanks for the article. I beg to differ.
Awareness that a company doesn’t operate in isolation from its environment and should be responsible towards its stakeholders is not a sin but actually what modern companies should do.
They could have focused on “not harmful” vs “beneficial”; nobody would have blamed them.
Similar to Google’s “don’t be evil,” if you get my drift.
It’s not the standards you set for yourself that make you vulnerable; the problem is when you do something objectively wrong. Microsoft had its years of hate until the change of management.
It’s less about the vision/mission and more about the lack of alignment of people on it.