Making Sense of OpenAI, Sam Altman and the Non-profit Board on Their Hardest Days
The things that went wrong this weekend have been wrong since the very beginning — the rest is fine and always has been
This is my fifth and final article about the recent OpenAI saga (you can read the other four here, here, here, and here). It is a calmer reflection on the significance of the events from Friday (the firing of CEO Sam Altman) to Tuesday (Altman’s return as CEO). It is a defense of OpenAI and Altman, but not for the reasons you may think. It is an attempt to understand the board’s actions and why a rift between the board and Altman ever existed in the first place. It is unsolicited advice highlighting the aspects I believe OpenAI should improve going forward which, surprisingly, have little to do with the recent events and everything to do with what OpenAI has always been and the mistakes its executives made at the very beginning. OpenAI’s original sin.
The crisis is over. The drama is over. Things are back to normal. Altman is returning. Greg Brockman is returning. The non-profit board has been renewed — both Altman and Brockman are out, as are three of the four members who executed the coup: Ilya Sutskever, Helen Toner, and Tasha McCauley. Adam D’Angelo stays. The board has added two new members, Bret Taylor and Larry Summers (whether they are good picks I leave you to decide). It will grow over time, up to nine seats, including at least one for Microsoft.
In most aspects (especially business and technology) it’s as if the events of the past five days didn’t happen. OpenAI employees have been shipping through the storm and ChatGPT is up; users and customers can breathe. But there are some loose ends that will require further explanation. For instance, the former board only accepted Altman’s return after he and Brockman gave up their board seats. More importantly, they ensured a more formal investigation would take place to elucidate the exact reasons that prompted their dismissal of Altman in the first place.
So yeah, things are back to normal for employees, customers, and investors, but not exactly for OpenAI and the mission. Not because the company or its values have changed, but because circumstances have pressed them so hard that the small disagreements — invisible micro-cracks that were always there — have burst open into dirty laundry for the world to see.
What makes OpenAI truly special
When I take a bird’s-eye view of the events of the past five days from a fresh perspective, I see the exact same AI startup I’ve been following for years, but wide open (about time), as if it had received a clean cut right at its core so that its innards, ugly and raw as all innards are, could no longer hide.
What this means is simply that we’ve had, for a weekend, unusual access to the usual power struggles and behind-the-scenes shenanigans that are commonplace in world-class companies, young and old (also in governments and virtually any place where power plays a big role). These things happen all the time; we simply don’t have first-hand or even second-hand contact with them. OpenAI is a very public company (despite being private and increasingly closed in its scientific and engineering efforts). In a general sense, it’s a celebrity startup: we know who they are, what they do, and why they do it. The apparent relevance of this weekend’s events reflects that more than ever before.
But OpenAI doesn’t enjoy (or suffer, depending on how you look at it) this kind of inordinate publicity for arbitrary reasons. It stems from two concrete realities: one that the company’s executives established at its inception and another that’s the result of the great work they’ve done over the years.
First, OpenAI claims to be working on what will presumably be the most important technology humanity has ever created: AGI (artificial general intelligence), “a highly autonomous system that outperforms humans at most economically valuable work.” AGI, however attained, promises to redefine the rules of society.
Second, although whether OpenAI achieves its goal is a different question, the truth is they’ve already made important strides toward it (not everyone agrees) — enough to make it look plausibly achievable and to spark widespread interest. In particular: GPT-2 and GPT-3 a few years ago, and more recently GPT-4 and ChatGPT. People have now felt directly what AI can do and have surely imagined what it could eventually do. That’s unprecedented in the history of AI — for the first time, reality competes with popular science fiction as people’s mental picture of the future.
The company’s ambitions and successes put it in the spotlight. The grandiose purpose it chases after and the trail of satisfactory achievements it leaves behind have earned it the public attention and scrutiny it gets. We have treated it singularly over the last few days only because OpenAI branded itself as a singular company and because, to a respectable degree, they’ve proved to be one.
There’s really not much unusual about what just happened — what’s unusual is what OpenAI thinks of OpenAI.
The original sin of OpenAI’s founders
Some of the interest OpenAI attracts is intended solely to criticize it. But is the pursuit of AGI a problem in itself or worthy of criticism? I don’t think so — building it is much harder than criticizing the efforts. Hype is not ideal, but anti-hype can often be worse. This crisis has accentuated the kind of attacks the company has been receiving for years (I’m guilty, too). I think all the events between the firing and the rehiring of Altman would be unimportant for the press and public opinion, and literally inconsequential for the world, except for one very important thing: OpenAI’s original promise wasn’t just to create AGI but to make sure it “benefits all of humanity.”
Making AGI universally beneficial was a constraint no one forced them to accept. They imposed it on themselves out of pure conviction that AGI should be safe and beneficial for humanity or it shouldn’t be at all. Laudable, with no buts, except for one: meeting this requirement has proven so hard — even when we are still so far from AGI — that it has become a profound hindrance to OpenAI’s efforts. I don’t think anyone outside the company truly believed that ideal was going to materialize. The OpenAI executives were naive. I don’t give credence to the idea that they believed setting such a high standard would grant them more opportunities or help them attract better talent, and so on. It was a mistake born from the purest, sincerest idealism. The original sin of OpenAI’s founders.
That original sin inadvertently forced the founders to chain one error after another, up until today. It made them start OpenAI as a non-profit, setting up the now infamous board. When they realized they’d need the kind of money only Big Tech could provide, remaining a non-profit became unviable. They had to publicly backtrack — receiving deserved criticism — to a capped-profit structure and partner with Microsoft, risking the mission they had set up (later we learned that it was Elon Musk’s withdrawal that forced Altman to seek investment from Satya Nadella). Capital had finally closed its grip on the otherwise wholesome endeavor.
OpenAI also promised to hold AI safety and AI alignment as the highest priorities in its hierarchy of values. If OpenAI had to be destroyed for the sake of safety, the board would do that (the board’s critics say this is nonsense, but imagine this exact scenario with nuclear weapons, for instance — it becomes much clearer that Helen Toner is right, at least in principle). If someone, including Altman, diverged from the safest path, the board could fire him. If investors tried to pressure the company to follow a high-growth, low-safety route, the board would cut ties. That was important. At least, that’s what Altman said right before he was fired.
The world evaluates you against the standards you set for yourself
The founders all agreed this was the essence of OpenAI — not just an AGI company but a safety-first AGI company, counterpart to DeepMind, which was effectively captured by Google at the time. Many people, inside and outside the company, were enthusiastic about it because of this strong set of immovable principles. But principles live in the realm of theory. As the saying goes, “no plan survives contact with the enemy.” The real world, a ubiquitous enemy for all idealists, has a knack for disrupting these kinds of abstractly flawless plans.
Perhaps, over time — due to the amazing and unexpected success of ChatGPT, or maybe a prior shift of beliefs for some undisclosed reason — the startup developed internal discrepancies that never really surfaced when it wasn’t yet so successful and well-known. Perhaps those discrepancies always existed as seeds waiting for the circumstances to water them; Altman and Sutskever, for instance, have radically different backgrounds, which marks a subtle but fundamental divergence in their approaches to AGI. Whatever the case, it seems that as time went by and the material reality on which they were building their predictions changed, those formerly tiny disagreements grew too large to be tolerable.
But that happens, right? Everywhere, all the time. If it’s been newsworthy these past days, it’s only because OpenAI was our hope — the eventual triumph born out of a seemingly altruistic premise that proved harder and harder to honor as they got closer to AGI. The founders set such a high bar to clear that it was as if they were running uphill with their feet tied together. Had OpenAI begun as a pure for-profit — as most AI companies are, and no one says a thing — would it have faced this kind of public backlash for this weekend’s events? I don’t think so.
What makes OpenAI special — and especially attackable — is that they held themselves to a higher standard than most, and we’ve been evaluating them against that standard, which, by definition, was impossible to meet.
OpenAI should abandon its divine aura
What’s wrong with OpenAI’s principles? A technology that’s beneficial for all of humanity can mean two things. First — and this is what I believe OpenAI always meant — AGI would benefit everybody from the outset. Perhaps not everyone equally (which is impossible even in theory), but a kind of tech designed to be not just a net good but a gross good for the world (I believe they believed this was possible; I don’t believe it is, though).
Second, the easier interpretation: by “beneficial for all,” OpenAI actually meant “beneficial for all at some unpredictable and arbitrarily distant point in the future.” That’s not only harder to evaluate but trivial. To some degree, all technologies are purely beneficial if we wait long enough. What would we do without farming or writing or fire or the wheel? AGI will surely meet this same criterion. But that’s a semantic trick.
To avoid this kind of confusion and prevent future backlash, what OpenAI should do is come down from the pedestal on which it places itself and become, like all the other companies, a common enterprise, even if it remains in pursuit of an uncommon goal. So here’s some unsolicited advice for OpenAI.
Drop the “a non-profit owns and controls us” discourse. We know that money rules the company, as it couldn’t be otherwise in a capitalist society. That’s fine. What is objectionable is the presumptuousness with which you make it look like you occupy a moral high ground, from which you can then build an enticing narrative to sell to the world. The weird capped-profit structure isn’t really working either, and likewise, that’s not bad if you accept it.
Drop the “for all of humanity” bit that gives you an aura of being our savior (that never ends well). There are no universal human values. Humanity is in constant conflict with itself; just look around. We live realities so alien to one another that not even within each of us is there agreement all the time. I mean, you are building your AI systems by scraping human data and using vulnerable people to train them, only for the resulting products to erase millions of jobs. Fine, the world is deeply imperfect and those are instances of that imperfection, but come on, just don’t try to make it seem like AI will solve every social and political problem — including the ones you’re worsening — with a tech-shaped panacea. You can use whatever means you want to achieve your ends, but don’t proselytize that your ends justify the means you choose.
Finally, drop the “we are the single most important group of people currently living” vibes that cast you as a sort of enlightened despot. They give you the aura of a religious cult. The bubble you live in only makes the rest of us unable to relate to you. Perhaps you don’t care, but that attitude defines the relationship you have with the world. Do people even want what you claim to be creating? Do you even care? Those questions are important.
If you do all that, you will be fine. As fine as you should have been all these years, and especially this weekend. No one will be able to make any criticism of substance, except perhaps to provide input about the methodology. Things only turn out bad or good against the standards we set for them. Google fired its top AI ethics researchers a couple of years ago. Microsoft disbanded its responsible AI team earlier this year. Meta did the same thing quietly over the weekend. Yet it’s you who receive the most criticism, just because you seem to care the most about safety while Altman wants the company to grow faster? That doesn’t make sense.
It’s from failed expectations that criticism emerges. Don’t let it. You are an imperfect company, like all companies are, with things to improve on the inside and the outside. You are playing by the rules of an imperfect game, but you are doing great work. Not everyone values that part, but it’s worthy of respect and admiration.
The world needs OpenAI to be more candid
AI is useful. AGI will be. Both could also be potentially harmful, like all other technologies. It’s good to have different people working on that, and the more ethically they do it, the better (this is definitely something OpenAI and the others could improve). Neither OpenAI, Altman, nor the board did anything wrong this weekend (at least, nothing more wrong than usual) as far as I can tell from the available information (the board didn’t provide any examples of malfeasance and said explicitly that wasn’t the reason for the ouster).
Do the particular psychologies of the leaders influence the company, the employees’ mindsets, and the perception the world has of it all? Sure. Is Altman unusually ambitious? Unusually persuasive? Unusually power-hungry? Good for him, I guess. He will achieve exactly what he wants as he seems to have done in the last few years. Will he choose to prioritize growth and value over safety? Fine. Will he pursue other projects (e.g. AI hardware, AI devices, energy, etc.)? Fine. But let’s not forget he could have chosen other life paths — some perhaps more profitable and surely less helpful for the world at large — that would suit him just as well. Are Altman’s decisions the best for a company that is presumably building a technology that will affect us all? I don’t know.
The exact same generous reasoning applies to the board members who executed the coup. They had their reasons to do so, which is clear from the fact that they agreed to rehire Altman only after he gave up his board seat and accepted the investigation that’s yet to take place. Without any financial incentive, the only explanation is that they did it on behalf of the company’s impossible mission. Was the board’s decision an attempt to gain power, or is it better framed as an attempt to limit Altman’s power? I don’t know.
Both Altman and the board acted in self-interest, as we all do all the time. Luckily, it’s in their interest to carry on in a way that benefits the world broadly. Trying to benefit literally everyone is what prompted a clash that ended up benefiting no one. It’s good for OpenAI to tone down its ambitions (or at least their scope) while at the same time strengthening its candor, probably the word of the weekend. More transparency, openness, clearer incentives, and especially more honesty would certainly be a more consistently candid approach to communicating with the world. That’s what I want and what they should have offered since the very beginning. No one would dare criticize that regardless of profit motivations or a goal slightly less crazy than making AGI the engine of a post-scarcity world of abundant wealth for everyone.
Some people have sided with the board (I predict more will after the investigation is conducted, if we ever get to know what really happened). Others have sided with Altman and Brockman (most OpenAI employees did). Which side you take will depend mainly on what you already believe is better for the world. What has happened these days and what Altman has done (the things we know and the things we don’t) are probably not strong enough reasons to change the views about OpenAI — the company; the business — that we held on Friday.
What should make us change how we see OpenAI is what it really is — what it has always been — which is not at all a consequence of this crisis, but whose inherent incoherences and implausible aspirations have definitely been brought to light by it.
To conclude with the loose ends that haven’t yet been tied up: I predict they will be resolved promptly and quietly. They won’t unveil new information, only things that have already been going on for months, probably years. It’s new only for outsiders. The board’s decision was merely an error-correction mechanism that worked just fine according to its mission.
I hope the OpenAI people will course correct if necessary and reflect on what they want to be and what they want the world to think of them.