It was not the potential benefits of nuclear bombs that led the US to develop the first one in 1945, soon replicated by the other superpowers. It was the fear that the Nazis could do it first. A reasonable fear (up to a point), but a fear nonetheless.
It’s not the potential benefits of AI that have led the US and the UK to, respectively, release the first Executive Order (EO) on AI and lead the AI Safety Summit this week. The one sentence that best describes both efforts would be: “They have bought the fear.” The fear that the long-term risks of AI materialize — including that it goes rogue and kills us all.
The UK AI Safety Summit was centered on the long-term risks of AI from the very beginning; the Prime Minister had been dropping hints for months that he had bought the AI existential risk narrative, so it’s really no surprise. This isn’t good or bad in itself, just profoundly illuminating of what matters to the UK government.
The EO, although much broader (it touches on virtually every topic), also bought an idea that industry leader Sam Altman proposed to the Senate in May: setting a threshold to control AI model development as a function of the level of capabilities (instead of regulating at the application level, which wouldn’t stifle open innovation), using the total number of floating-point operations (FLOPs) spent training a model as a temporary proxy for “capabilities” (not the best approach if the goal is to minimize all kinds of risks). Companies that develop models above the threshold (10^26 FLOPs; above GPT-4, but probably not GPT-5 or Gemini) will have to report “safety test results” to the government.
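To get a rough feel for what that compute threshold means, here’s a minimal back-of-the-envelope sketch in Python. It relies on the common ~6 × parameters × training-tokens approximation for dense transformer training compute; the model sizes and token counts below are purely illustrative assumptions, not figures for any real system.

```python
# Back-of-the-envelope check against the EO's 10^26-operation reporting threshold.
# Training compute is approximated with the common heuristic ~6 * parameters * tokens
# for dense transformers; the example models below are hypothetical, not real ones.

EO_THRESHOLD_FLOPS = 1e26  # total training operations that trigger reporting


def training_flops(params: float, tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6 * params * tokens


# Hypothetical model configurations (parameter count, training tokens).
hypothetical_models = {
    "Model A (70B params, 2T tokens)": training_flops(70e9, 2e12),
    "Model B (1T params, 20T tokens)": training_flops(1e12, 20e12),
}

for name, flops in hypothetical_models.items():
    status = "must report" if flops >= EO_THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

Under these made-up numbers, Model A lands well below the line while Model B crosses it and triggers the reporting requirement: a single bright line drawn on compute, regardless of what the model is actually used for.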
This may not sound that bad, but writing these reports is a real hurdle for smaller players and a negligible delay for larger ones. Imagine if the government had regulated the size of software programs or the number of transistors a device could have back in the 80s or 90s; in a matter of months, only those who could pay the “risk tax” would have kept innovating. The world would look radically different: much more centralized, and controlled by a few players (yes, even more than it is today).
Sam Altman surely knows this applies to AI — he is the one who hypothesized Moore’s Law for Everything, which presumably includes the large language models his company builds.
Don’t get me wrong, this is not a judgment call on the regulatory approaches of the US and UK governments. There are aspects that deserve praise. For instance, the EO covers misinformation and disinformation, bias and discrimination, and cybercrime. But those risks are conflated with longer-term risks like autonomous agents, bioweapons, and AI as an existential threat. At the same time, transparency on training data or specific copyright regulations, worries that matter a lot in the current landscape, aren’t mentioned at all. As Karen Hao and Matteo Wong write for The Atlantic, “President Biden’s big swing on AI is as impressive and confusing as the technology itself.”
I don’t claim to know which kinds of regulations could improve this (I literally don’t have a clue). But that doesn’t stop me from realizing it has profound fear-driven deficiencies. I also don’t know what the future of AI would be like if they managed their fear better, but we will soon know what it is like when they don’t. So yeah, fear is winning. And it’s leading to haste, confusion, and a one-size-fits-all kind of AI regulation.
What strikes me as unexpected is that fear wins independently of the direction in which it moves us. We can’t equate regulation with fear and no regulation with optimism and bravery. With nuclear weapons, fear moved the US to develop them faster, at all costs. Only later did a different fear force the world’s superpowers toward regulation in the form of a non-proliferation treaty.
With AI, fear is moving countries worldwide, including the US, the UK, the EU, China, etc., to regulate the technology in a way that defies common sense: instead of regulating applications and their deployment in the world, they intend to regulate knowledge and research. They failed to rise to the challenge of social media on time, underregulating it dangerously, and are now overcompensating with AI, moved by the fear of making the same mistake.
No regulation, late regulation, bad regulation… fear is unpredictable in the direction it pushes us, but following it blindly leads us to make bad decisions.
I don’t think anyone argues in favor of no regulation (except perhaps Marc Andreessen and the effective accelerationist crowd). What I see clearly is that some people are arguing against that fear. Their position can be summarized by the principle that we shouldn’t let fear guide our beliefs, our behavior, or our battles, even when it’s prompted by the inescapable uncertainty of what the future holds.
When that happens, those who control the fear can do whatever they please. That’s why Yann LeCun and Andrew Ng, among a few others, are so relentlessly calling out the “lobbying for a ban on open AI R&D” by a few Big AI companies (and prominent researchers). They don’t want fear to win because hidden behind it, they argue, there are interests that benefit, in one way or another, a tiny minority.
For instance, companies like OpenAI, Anthropic, and DeepMind would effectively capture regulation in their favor (which they deny) and eventually control the AI sector in full. Even if that capture were an accidental byproduct of genuine fear, it makes them look deeply insincere. That’s what angers LeCun and Ng: the number of honestly fearful people is much smaller than it appears to be and wouldn’t be enough to convince governments to police AI this way; most of the apparently fearful, who are doing the lobbying, have personal, political, or economic incentives to tap into that supposed fear publicly.
Fear has, however, a reason to be.
It always points to a very specific danger ahead. If the Nazis had created the atomic bomb first, who knows what would have happened and what the world would look like today (if there were a world left to look at). AI could indeed grow uncontrollably and become dangerous (either through misuse by malicious actors or through autonomous recursive self-improvement). That’s the ultimate danger the AI fear points at.
Neither LeCun nor Ng denies this. The fear is not completely irrational in that it reveals a non-zero possibility that the danger will become real. They accept that. But for them, this isn’t enough. They dismiss it as distracting at best (e.g., Big AI lobbying) and as real yet manageable at worst. They firmly believe that we will find a way to control increasingly intelligent systems before they get too powerful; in their view, regulating them too soon, too heavily, or too clumsily is worse than doing nothing.
It’s not that the danger is imaginary but that the fear is overstated and deserves neither so much attention nor preemptive actions that could deeply damage, for instance, any open-source competition. Today GPT-4 looks like an unimaginably powerful AI model, but in a couple of years it will be a toy, a “pocket watch,” in Andrew Ng’s words. By then, if the government manages to enforce the new regulation, open-source initiatives won’t even be able to play with what we’ll see as ridiculously outdated toys.
Of course, people like LeCun or Ng, who publicly and vocally oppose the leading AI risk and AI safety narrative, have their own interests.
They aren’t moved by pure altruism. No one is. I’m not siding with them because I think they’re saints or act in the name of the greater good, or because they have access to a truth the rest of us ignore (I believe, as Hinton does, that there’s a lot of unexplored uncertainty in these questions). For instance, LeCun is a representative of Meta, which has cast itself as the people’s helper with LLaMA and Llama 2 but which, as soon as you look it up, reveals a history of much darker motivations.
This discussion is not about who is good and who is bad. If I’m siding with them it’s because I think it is never a good idea, in politics or in life, to let fear win over our decisions or our actions.
Isn’t it peculiar how suddenly superintelligence has become synonymous with existential risk?
All I see is men fencing with hypotheticals and vague thought experiments, letting their imagination take them for a ride, warning the world about some hypothetical generation X of this technology.
I’m sorry but I don’t buy it.
Andrew Ng said it best in his post on The Batch: “When I try to evaluate how realistic these arguments are, I find them frustratingly vague and nonspecific. They boil down to ‘it could happen.’ Trying to prove it couldn’t is akin to proving a negative. I can’t prove that AI won’t drive humans to extinction any more than I can prove that radio waves emitted from Earth won’t lead space aliens to find us and wipe us out.”
Fear is a very reasonable response when you don’t want to die. We evolved it for a reason.