This Subtle but Fundamental Conflict Made OpenAI Break Up With Sam Altman
It’s not a clash between doomers and accelerationists — both Altman and Sutskever are pro-safety and AI-optimists. So, what happened?
Let’s try to elucidate, with the available information, what happened at OpenAI that resulted yesterday in the unexpected ousting of CEO Sam Altman and Chairman Greg Brockman.
I’ve seen people framing the internal clash as a conflict between AI doomers — those who think safety is the most important aspect of AI because a rogue superintelligence could kill us all if we are not careful — and techno-optimists and accelerationists — those who think the harm of not developing AI sooner is what we should be wary of. To them, every day that we don’t have powerful superintelligence to help us is a day of unnecessary suffering.
I think the reasons for the schism are subtler. The closest that I’ve seen someone come to my own interpretation of the events is this take by David Pfau:
No idea if the “doomer vs e/acc” framing will turn out to be what really happened, but if it is then I guess I'm on the side of... neither? I think the people rushing towards AGI are chasing a mirage, and the people trying to stop it are being frightened by shadows.
If it's more a matter of "research vs. product" though, and people thought Sam was just turning the place into The ChatGPT Company and wasn't supporting basic research? Well, that's a different matter.
I think, as Pfau does, that the real discrepancy, although related to “doomer vs e/acc,” is less sensationalist and less extreme — something that shouldn’t, at first glance, prompt such drastic measures.
If we accept that everyone at the company wants (wanted) to create AGI to benefit everyone (regardless of whether they could actually do that), which is a reasonable assumption, then the differences can be narrowed to a matter of methodology. From this, we can draw a line that separates two distinct groups.
One thinks AGI is better achieved by deploying products iteratively, raising money, and building more powerful computers — what we could call the industrial/business approach. The industrial approach group includes Altman and Brockman, who have championed the idea of iterative deployment in public the most. The other group, comprised of those who executed the coup against them, including Chief Scientist Ilya Sutskever, thinks AGI is better achieved by conducting deeper R&D on things like interpretability and alignment and devoting fewer resources to making consumer-oriented AI products — what we could call the academic approach.
The confusion comes from an interesting, perhaps coincidental double overlap. First, between the industrial-approach group and the techno-optimists who believe accelerating innovation is the only reasonable stance. Second, between the academic-approach group and the AI safety advocates. Although these overlaps are generally real, I don’t think the parallels are perfect. For instance, Yann LeCun is both a techno-optimist and focused on R&D above production and commercialization (even though he supports the latter).
If OpenAI’s breakup is surprising, it’s because Altman and Sutskever aren’t opposites, the way doomers and accelerationists are. They are quite similar. But when the stakes are so high, “quite similar” is not enough. Let’s see why this conflict, although tiny, is likely the reason behind yesterday’s impactful events.
Everyone who has (or has had) a primary role at OpenAI is bullish on getting to artificial general intelligence (AGI) soon — they are all techno-optimists to some degree. Sutskever is definitely a believer in the benefits of superintelligence. Even Anthropic’s founders, who left OpenAI after a similar conflict, believe it’s more probable that AGI becomes a net good for humanity. OpenAI people are also, as Altman has stated in public many times, advocates of the necessity of AI alignment and AI safety. There’s no apparent difference there, at least not one that’s immediately evident to people outside the company.
What they disagree on, which I believe is the main source of the schism, is the degree to which they think they should understand what they are creating before going on. Altman is content with making good products that people enjoy — he’s a businessman at heart — while he figures out the best road toward AGI. Sutskever — a science-and-research-first kind of guy — considers his priority to be figuring out how to control a superintelligence by first aligning its goals and values with ours. Sutskever’s goal apparently required first aligning his company with his beliefs, which in retrospect makes a lot of sense. What was hardly predictable was the degree to which the tiniest discrepancy could prompt him to force that inner alignment at all costs.
That it has happened now, instead of when they trained GPT-3 or GPT-4, is not because these differences didn’t exist then, but because their implications at that level of AI competence were not sufficiently critical to warrant action. Why now, then? The reason Sutskever considers they’ve finally crossed the threshold of acceptable disagreement is likely related to the hidden meaning behind these words Altman said in a discussion at the APEC summit earlier this week — simply put, the board believes OpenAI is too close to AGI to proceed with the kind of nonchalance that characterizes Altman:
On a personal note, like four times now in the history of OpenAI — the most recent time was just in the last couple of weeks — I’ve gotten to be in the room when we pushed the veil of ignorance back and the frontier of discovery forward.
A couple of weeks ago, during the first Developer Day, Altman concluded with a similarly cryptic yet optimistic message:
What we launch today is going to look very quaint relative to what we're busy creating for you now.
While Sutskever appears to be inherently careful, Altman has always said he firmly believes the future will be “amazingly great,” despite AI’s risks. It’s not that Altman was never worried OpenAI could go too far, too soon; his optimistic character simply prevented him from being paralyzed by fear.
It’s interesting to realize that such a tiny difference, at least in the eyes of the world, has resulted in an irreversible rupture in what’s probably the most important AI company at the moment, which in turn implies a fundamental fork for what’s probably the most important technology at the moment.
For most people, Altman and Sutskever are on the same page. It’s clear that the latter, at least, doesn’t agree with that. What’s unclear is who is right: Can you build AGI without a strong product roadmap? Is it safer to do it behind closed doors rather than following a process of iterative deployment — or more open still, as Meta is doing with the Llama family of models? Can anyone get to AGI without satisfying, at the same time, the economic requirements of the shareholders and investors?
Perhaps what Sutskever didn’t predict were the implications of ousting Altman and Brockman in such a dramatic and abrupt way: in public, without prior notice to them or anyone else, and with a harsh tone in the communications. Perhaps what he has caused is the exact opposite of what he wants: more competition from people willing to go faster toward AGI, like Altman himself; people growing more bullish on open-source AI out of a newfound distrust of centralized AI providers; or effective accelerationists doubling down on their efforts against AI safety and doomsaying — even though attenuating risk is a defensible attitude in many cases.
Altman wanted “magic intelligence in the sky” but Sutskever seemingly doesn’t think an app store is the way there. Similar principles but different personalities. Sometimes those are not reconcilable. Sutskever may not be willing to risk entering into an arms race toward AGI. Perhaps he doesn’t like that entrepreneurs, and not scientists, are leading the efforts.
And I bet he doesn’t quite like Altman’s admiration for Oppenheimer or the fact that for him, the sweetness of discovery comes before a thorough analysis of the repercussions.
Who is right, time will tell.
You're really onto something here. Early reporting from Kara Swisher suggests that Sutskever found support on the board from Helen Toner, the director of CSET. She has authored numerous reports that make clear her concerns about how generative AI models may inflict harm as they grow more powerful -- whether she's a closet doomer or not is almost irrelevant, given the present dangers that concern her. Here's a quote from one of her reports:
As machine learning systems become more advanced, they will likely be deployed in increasingly complex environments to carry out increasingly complex tasks. This is where specification problems may begin to bite. Without significant progress in methods to convey intentions, machine learning systems will continue to carry out their instructions exactly as given—obeying the letter, not the spirit, of the rules their designer gives them. To address the challenges posed by misspecification, more machine learning research needs to account for worst case scenarios and develop algorithms that more explicitly incorporate human supervision or provide theoretical guarantees for the worst case performance under a given specification.
Almost by definition, using commercial deployment to test generative AI systems as a product runs afoul of this imperative. The (much?) safer way to account for worst-case scenarios and develop theoretical guarantees -- perhaps better described as safeguards -- is to conduct research on them in advance. This may end up being the opening salvo in an epic philosophical war.
https://cset.georgetown.edu/wp-content/uploads/Key-Concepts-in-AI-Safety-Specification-in-Machine-Learning.pdf
I mean more on a human relationship/emotional level. Like maybe their philosophical alignments haven’t changed at all, but for some reason(s) there’s an inability to assume good intent that previously existed.