It’s launch day.
After years on the project, OpenAI’s ambitious quest is coming to an end. GPT-7 is ready for prime time. It’s a multimodal, embodied general intelligence capable of causal reasoning, deep thinking, and generalization, endowed with endless memory, sharp accuracy, and light-speed processing power. OpenAI’s execs are making their dream a reality. They’re going to revolutionize the world for the better. Proud of the achievement, they publish a clarifying announcement:
“We’ve run a few internal tests and our conclusion is clear: AGI is here.
It’s competitive out there, so we're not going to disclose how this works. Or how it was designed, built, trained, taught, and deployed. It’s undoubtedly a superintelligence, but we can’t provide the means to check that claim either (or test it against your benchmarks).
You can’t deny openness is problematic: we can’t let bad actors—who may not want to benefit all of humanity—replicate our work. We’re concerned about downstream harms, just like you, but we made it anyway. We’re keeping it under lock and key, though (unless you pay).
It might be unsafe, but don’t worry: we’ll support regulation that ensures prudent deployment of similar technology. Not for us, of course; we’ve already put in guardrails—just have some faith that we’re doing it right.
We know our product isn't perfect—but neither are you.”
This (perhaps excessively) sarcastic story was completely hallucinated. You may have noticed, because OpenAI wouldn’t build an embodied AGI (you know, “scale is all you need” or, the newer line, “gradient descent can do it”). It wasn’t ChatGPT that made this up, though, but me—and who could blame me for making a few factual mistakes that may mischaracterize OpenAI?
The underlying narrative isn’t all that fictitious, though. If we project OpenAI’s recent behavior into the future, the scenario I’ve caricatured is a reasonable possibility. The fake announcement quoted above is scarily close to what has actually happened with GPT-4. It’s not too crazy to imagine OpenAI facing the seventh GPT iteration with the same mindset. OpenAI has changed since its founding, I concede, but that's no consolation either: I struggle to find one area where the change has been for the better.
OpenAI’s new face
In a series of interviews, OpenAI's leadership trio of CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever made it quite clear that OpenAI's original charter is dead. GPT-4 is the final nail in that coffin. The implications of OpenAI's new face (or, rather, its new mask) will be broad and deep.
On March 15th, Sutskever told The Verge that they (OpenAI) “were wrong” about open-source AI. “[I]n a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise,” he said. But when asked about the reasons that moved OpenAI to keep GPT-4’s specs secret, he ranked competition as the main one: “It’s competitive out there. GPT-4 is not easy to develop … And there are many many companies who want to do the same thing.”
This complete shift from open source to closed source and full opacity is best reflected in this paragraph from GPT-4’s technical report (notably, not a research paper):
“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
If people (AI researchers in particular) are disappointed and angry at OpenAI (as I write this, I can’t help but realize that the name is indeed a hilarious joke), it isn’t because of the change itself—no one would dispute a private company’s decision to choose profits (i.e., survival) over altruistic cooperation.
The reason for the widespread discontent is that OpenAI was special. They promised the world they were different: a non-profit AI lab with a strong focus on open source, untethered from the self-interested clutches of shareholders, was unheard of, and that was the main reason the startup initially amassed so many supporters.
Even Elon Musk, who initially funded OpenAI—and who isn’t someone who can be accused of being pro-cooperation—had this to say about them:
There may be other behind-the-scenes conflicts between Musk and OpenAI, but that doesn’t make him wrong. To illustrate just how much OpenAI has changed for the worse, here’s the first paragraph of the first blog post they published in 2015, the year of the company’s founding (emphasis mine):
“OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”
Quite a different prospect from what it’s turned into. Again, it’s legitimate to change over the years and turn into something else—even if it looks nothing like the original promise and even if the new version is despicable in the eyes of the world—but I find it ludicrous that they still act as if they hold the moral high ground.
On February 24th this year, Sam Altman published a blog post entitled “Planning for AGI and beyond.” The first paragraph says: “Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity [emphasis mine].” The hypocrisy is strong with this one. How can they try to sell the exact same message after an eight-year journey of policy changes that has taken them to the other end of the spectrum: from open source to closed source, from non-profit to for-profit, from non-corporate to fully corporate?
But OpenAI can’t hide the truth anymore. People are starting to see through the masks they put on to be seen as the good guys.