The Full Weight of the Law Looms Over OpenAI
The five forces that could take down OpenAI—part 2
Tech-focused regulation is slow. Much slower than tech progress. But it always catches up eventually, and right now, it's catching up.
There’s a reason why governments and policymakers are so interested in talking with the big names in the AI industry: they don’t want to make the same mistakes they made with social media; they don’t want to let them self-regulate. They’ve learned their lesson and, combined with an unusually pervasive call from both academia and companies to regulate AI, they feel the urgency.
OpenAI is, inevitably, in all those conversations—with Biden, with Sunak, in the Senate, and with Schumer just today (behind closed doors). As the center of the generative AI boom, the young startup enjoys the privilege of being consulted on these issues just as much as the big tech titans (though, given their vested interests, it may be a mistake to grant them so much prominence).
OpenAI’s CEO, Sam Altman, has publicly expressed his concerns on issues that must be handled, like the potential effects of targeted disinformation on the upcoming US election, job losses that could upend sectors historically protected from innovation, or AI-enhanced cybercrime. For that, he should be thanked. Not because he’s the first to note those problems but because his voice is louder than most and he’s repeatedly shown a sensible stance.
More remarkable is his eagerness to embrace regulation and his willingness to explain complex topics to non-AI-savvy government officials—both stand in stark contrast to the way other tech CEOs have dealt with these questions in the past. “Refreshing” was the word Senator Richard Blumenthal used to describe his approach. Altman knew regulation would catch up with progress and that he'd eventually be forced to choose between opposing it, in an unnecessary flaunting of the AI industry's power, or accepting it with a smile. Smart guy, smart move.
But of course, being somewhat inclined to accept regulation—and being clever enough to show predisposition and even enthusiasm toward it—doesn’t prevent him from picking fights that suit his company. He isn’t as keen to talk about the human factory that silently powers generative AI or the copyright infringements that ChatGPT allegedly engages in just by existing.
His reluctance to touch on delicate topics shouldn’t be surprising, though, because one thing is certain: he needs OpenAI to survive. OpenAI has to have a free hand to keep doing exactly what it’s doing, and for that, Altman might reasonably be willing to sacrifice his honesty to some degree. He wants to keep those practices going—preferably within the bounds of legality—and one way to do that is to divert regulatory efforts toward other aspects he claims are more urgent.
That’s the game Altman is playing to protect what matters most to him and his company. And he’s doing it masterfully by going above and beyond what’s expected of someone in his position. So far he's impressed almost everyone. Almost.