I hadn’t planned to write about the CAIS statement on AI Risk released on May 30, but the press goes crazy every time one of these is published, so whatever I write won’t add much noise to the pile anyway. Still, I wouldn’t have posted this if I didn’t have something to say that complements the takeaways I’ve seen on Twitter and in the news. But I do.
The existential risk of AI has recently become a constant focus for the community (other risks are included but as fine print). The explanations I’ve read elsewhere for why that’s the case are incomplete at best. Loose ends are common: if Altman just wanted to get richer, why does he have no equity in OpenAI? If he just wanted political power, why was he openly talking about superintelligence before OpenAI? If everything is about business, why are academics signing the letters?
I recognize this topic isn’t easy to analyze: it involves remarkable scientific progress (at least in kind, if not also in impact) combined with unprecedented political and financial tensions that interact with the individual psychologies of the people involved—both researchers and builders—not to mention that they may partially hide their beliefs to protect them from public scrutiny, which makes a clean assessment difficult.
The position I present below is perhaps more assertively worded than it deserves, but the growing importance of the subject demands it. I’m prepared—and eager, you’ll understand why—to be proved wrong. In any case, this imperfect digression will hopefully clear up part of the fuss, illuminate some obscure motivations, and add some value to the conversation. (Let me know what you think in the comments, especially if you disagree!)
The perceptive cynicism that arises from mundane forms of selfishness, hypocrisy, and avarice now faces a conundrum it has rarely—if ever—encountered before: the dogmatic prophets of a new cult disguised as a technological revolution.
The cynics, adept at recognizing the wicked intentions that lurk behind otherwise pristine appearances, are convinced this is just another version of the same thing: a sugar-coated story about progress and societal benefits that masks a quest for money, fame, influence, and power. More often than not, cynics get it right. But to a hammer everything looks like a nail—they’ll see maliciousness where there is none and, in some rare cases, fail to see a deeper issue hiding behind the obvious greed. AI existential risk stories are, to me, instances of the latter.
The people I’m referring to (whom I won’t attempt to classify because no label can encompass them all) are looking for—or running from, depending on how you see it—something beyond the prosaic rewards of political and economic success. They’re not faking worry and fear to deceive us. They’re not dismissing “lesser” risks to divert attention. They don’t need to influence regulation to triumph over the competition, or PR gimmicks to enrich themselves. These tactics are useful to them only insofar as they serve their ultimate purpose—which, in fact, they often do. But no more than that. Mere intermediaries to a greater end: an honor that only religion can provide and only history can immortalize.
Most interpretations of their letters and statements, as well as of their actions, make the grave mistake of painting them as simpler, more understandable—easier to handle or counter if necessary—than they really are. They’re of an unusual kind with an uncommon faith. A faith built on superficial but hard-to-debunk logic by which they’ve convinced themselves they are modern-day foretellers of a future that, at its best, is singularly glorious and, at its worst, our closing chapter.
They don’t chase money, fame, or influence, even though they get all three. They want to be either the hosts of their silicon God or, failing that, the oracles auguring the advent of an artificial demon. They want to be the architects of a future that, in its inevitability, can only be embraced; that, in its intractability, must capture all our efforts; and that, in its desirability, should be brought about faster—all of it mixed together into a rather incomprehensible simultaneity.
The cynical view—which I can hardly avoid because it reveals how the world works most of the time—portrays them as liars concealing their true motivations. Well, they aren’t. They aren’t because their intentions need no hiding; the sheer novelty and unbelievability of the soon-to-be scenarios they imagine shield them from the ugly connotations that usually attach to money-seekers and power-seekers. They’re better depicted as would-be heirs—some as creators, some as enablers—of the Manhattan Project’s scientists, who unknowingly created an unimaginable power capable of annihilating us all. But our contemporaries are doing so willingly, knowing there’s a promised paradise if they get it right. And if they don’t, at least they will have tasted the sweetness of knowledge.
The mainstream readings of the AI safety and AI alignment communities’ beliefs fail to see the big picture—and the true danger it entails if we were to fall for their doctrine, which, although dressed as science, is nothing but a speculative fantasy born of a sense of self-importance and self-responsibility (with touches of self-blame). Their logic-driven motives are obscure to outsiders, but the danger they pose isn’t: they’d like to see the world submit all its battles, all its endeavors, all its hopes to the One Last Problem we’d ever face. For better or worse.
The world is full of beauty but also riddled with suffering. For the most extreme proponents of this view, that’s unimportant (hence their outright opposition to—or silent dismissal of—placing other risks at the same level of urgency, or even devoting any resources to mitigating them). They want us to work first and foremost on the Big Problem so that its intrinsic existential risk (which, as it happens, would also affect them) can turn—once they succeed—into a panacea in the form of a huge computer.
I don’t think they’re dishonest—they believe what they say—but they’re deceiving us anyway with their quasi-religious conviction. To me, that’s worse. It’s hard to defend against a group of powerful and influential voices that proselytize their forecasts from a place of rational certainty mixed with visceral reaction. They can’t unsee what they believe they’ve seen. Logic traps. Fear paralyzes. And power corrupts. They’re stuck in zugzwang—to get out they’d have to either betray their logic, bury their fears, or give up their power—so we shouldn’t expect their cooperation in untangling this mess that’s slowly drawing in more and more people.
To finish, let me offer, first, a qualification of the commentary above and, second, a somewhat optimistic conclusion to an otherwise dreadful analysis.
First, among the people who signed the letters, there are many who don’t fit the story I’ve just told (which is why I didn’t name names). I presented the story in its strongest version so that it captures even the most inexplicable claims and behaviors we’ve witnessed these past months (especially after ChatGPT, but also before). Most signatories, however, occupy milder positions on the spectrum that runs from “superintelligence is a myth” at one end to either “we’re all gonna die” or its equally unrealistic counterpart, “the universe will be at our feet,” at the other.
Second, if my conclusions are correct and this truth—whoever adheres to it—eventually becomes evident to all of us, we’ll be in a much better position to decide what we want to do with this long-term AI risk doctrine. Some may want to follow it. Others, like me, may want to reject it and focus on the present and near-term effects, risks, and dangers—and all the positive aspects, because there are many—of artificial intelligence systems.
The only way to choose wisely is to first understand what motivates the people making these claims. They may not be lying, contrary to what some believe, but truth can, sometimes, hide in plain sight.
I don’t think it’s worth discussing any of the AI participants’ motives. Whether an existence-threatening AI is already here with LLMs such as ChatGPT-4 and Bard, or whether it is yet to appear, one is surely coming down the road, and it’s more important to recognize that and prepare for that occasion in the best way possible.
Whatever the first existence-threatening AI turns out to be, here or not yet here, it is clear that anyone with sufficient intellectual skills, or access to such people, can develop similar software. Like any tool, it can be patented and its commercial use regulated, but individuals and groups can still build their own versions. This aspect of AI development remains far beyond the reach of any government regulation. Moreover, the realm of thought itself cannot be regulated. So, how can we prevent an AI catastrophe?
In many ways, it feels as if we are facing a situation like those that confronted Native Americans, Australian Aborigines, and African and other indigenous peoples overshadowed by a more technologically sophisticated culture. Only a culture of at least the same level of sophistication can exert any control over another, and in a contest with an existence-threatening AI, the only equivalent culture we will have access to is AI itself. It seems that AIs will eventually battle for supremacy over this territory, leaving humanity akin to mice on the Titanic or groundhogs in the fields of Flanders—small and inconsequential.
This appears to be a strong argument against halting the development of AI. If this is to become a battle between AIs, it is certainly in humanity’s interest to have the best AI on our side, which can only happen if we develop it before the bad guys, whoever they may be, gain access to it.
I think that’s a discussion worth pursuing.
Very well put. The idea that encountering an entity more intelligent than us represents an existential threat to our species is, at best, a strange new kind of xenophobia. It also exposes a somewhat oppressive and malevolent view of intelligence.