The OpenAI Drama Confirms an Open Secret That Finally Defeats the AGI Dream
Can Sam Altman, honestly and transparently, answer the biggest question he will face now?
The events that have unfolded over the weekend around OpenAI’s executive team and the non-profit board — in particular, Sam Altman’s firing followed by his yet-to-be-confirmed return as pressure from employees and investors increases — provide undeniable evidence of a devastating truth for people who advocate for AGI (artificial general intelligence) that is beneficial for all of humanity.
(Edit: Altman isn’t coming back as CEO and will instead be joining Microsoft together with Greg Brockman and probably many other OpenAI employees. The arguments below remain valid, to the degree they can be, under the current circumstances.)
No matter how hard you try to play outside the rules that constrain the world you live in, you can’t. In a profit-driven capitalist society, regardless of its many virtues, you will always encounter one unavoidable peril that takes this form when extrapolated to AI: however AGI is achieved and whatever it is capable of, it will always be subject to the interests of the people who pay for it.
This won’t come as a surprise to many people — especially the tech industry’s critics. It’s certainly not a surprise for me. But until now, we had no concrete evidence that this applied to OpenAI (and, as Jeremy Kahn suggests, by extension also to other frontier AI labs like DeepMind and Anthropic). Like everyone else, OpenAI plays under the rule of capital, but its founders devised a convoluted yet rather ingenious structure to protect any efforts toward AGI from being captured or compromised by stakeholders seeking financial returns.
The basic idea was to subordinate a for-profit subsidiary, charged with raising and deploying the many billions of dollars (in the form of computing power and engineering talent) required to achieve AGI, to a non-profit board (the now infamous board) that would own and control the whole thing — with the added implicit authority to fire Altman, the CEO, as he himself told Emily Chang earlier this year — on behalf of the company’s mission, i.e., “safe AGI that is broadly beneficial.”
The non-profit board would serve “humanity, not OpenAI investors,” and would ensure that the for-profit’s actions advanced the ultimate mission, regardless of how it managed external investments and the profits from product revenue. Importantly, the board would also decide when AGI (by their definition, “a highly autonomous system that outperforms humans at most economically valuable work”) is achieved, excluding it and all superior tech “from IP licenses and other commercial terms with Microsoft.”
If this structure looks unusual, it’s because it is. Although it embodies a laudable effort to fulfill the intuitively impossible mission of creating a universally beneficial technology in a capital-driven world, there was an evident internal conflict: what would happen if the for-profit branch considered raising an arbitrarily large amount of money essential to achieving AGI, but the non-profit board considered that approach an increasingly unsustainable risk to the interests of humanity, as OpenAI became ever more beholden to powerful players seeking financial returns?
Well, now we know.