Two things
TAB schedule: From now on I’ll publish TAB on Mondays (“What you may have missed,” still looking for a better name!), Wednesdays (instead of Tuesdays), and Fridays to avoid asymmetry, which no one likes.
AI as an existential threat: Debates about AGI as a vector of the end of humanity or the dawn of a new world are taking over the public conversation. I want to shed light on critical aspects that are rarely discussed in the media or by experts, so expect more articles on this from me. I’ll weave in other topics to avoid monotony, which no one likes either.
Imagine this: In front of you there’s a big magical button. You happen to know that, if you press it, there’s an indeterminate but non-zero chance that you’ll solve all the world’s problems right away. Sounds great! There’s a caveat, though. At the other end of the probability distribution lies a similarly tiny but very real possibility that you will, just as instantly, kill everyone.
Do you press it?
(Btw, here’s a much more harmless button you can press instead:)
Superintelligence: Utopia or apocalypse?
That button is, as you may have imagined, a metaphor for the hypothetical AGI or superintelligence (I’ll use the terms interchangeably) we hear about everywhere nowadays. The dichotomous scenario I described is the setting that so-called “AI optimists” and “AI doomers” have plunged us into. Superintelligence will be humanity’s blessing or humanity’s curse. It’ll be a paradisiac dream or a hellish nightmare. It’ll be the panacea that solves all our problems or the doom that ends human civilization.
Public discussions on social media and traditional media about superintelligence and the broad range of futures that will open up if—or when, for some people—we manage to create an AGI have captured the conversation; everything else pales in comparison. Debates about current, actual problems that are existentially urgent to many people are relegated to obscurity because they’re not as “existentially serious as … [AIs] taking over,” as AI pioneer Geoffrey Hinton stated recently.
It doesn’t surprise me, though, because what’s more pressing than deciding what to do next when, if we choose well, we can achieve the end of suffering instead of causing the end of the world? In this framing of the situation, Hinton is strictly right: his worries are more “existentially serious.” One step toward utopia is one step away from dystopia—and there’s no pathway in between.
But it feels like something’s missing, right? Let me get back to the button thought experiment for a moment. Maybe you noticed it: I didn’t mention what would happen in the vast number of intermediate scenarios where pressing the big button neither solves every problem nor kills everyone.
Rationalists like math and logic, so here’s a logical truth that’s hard to deny once we strip the question of the emotional charge that fills it: if there’s a tiny probability that pressing the button ends the world and another, equally tiny probability that it saves humanity—where both probabilities are an arbitrary product of our ability to imagine them, because no one has any idea how we’d get there—then there’s necessarily an overwhelming majority of futures that are neither of those two.
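To put rough numbers on it (purely illustrative: no one actually knows these probabilities, so the 1% figures below are assumptions for the sake of the argument):

$$P(\text{utopia}) = 0.01, \qquad P(\text{doom}) = 0.01, \qquad P(\text{neither}) = 1 - 0.01 - 0.01 = 0.98$$

Whatever tiny values you assign to the extremes, the leftover probability mass dwarfs them.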
But hey, in the face of such world-changing outcomes, even if they’re minimally probable, why would we care about anything else, right? It’s reasonable that we deem scenarios more mundane than that—i.e., everything going on at present, actually—not worthy of much thought. This is, as crazy as it may sound, an increasingly popular opinion among AI experts.
From ‘Singularia’ to ‘Paperclipalypse’
Veteran Substack blogger Freddie deBoer wrote about this recently. He’s not involved in AI as far as I know, but maybe that’s the very reason he can perceive this strangely normal truth so clearly when AI people like Hinton, Altman, or Yudkowsky don’t (want to) see it. He writes:

“Consider the question posed by the latest podcast from the Free Press: Is AI The End of the World? Or the Dawn of a New One? … I choose this example not because the framing is extreme but precisely because it’s so utterly common. These are the only options: we face utopia or apocalypse. There is no alternative. No third path is possible. The very notion that the world is going to mostly go on the way that it has, always the best bet you can make, has been written out of the conversation. It’s unthinkable that we might all be forced to continue to stumble along in the same mundane world, caught up in the same hazy fog we’ve all been caught in for so long. That notion is not just deprecated. Through the force of affirmation, it’s rendered impermissible.”
I like that: “The best bet you can make” is to believe the world won’t change much. Taking this bet at pretty much any time in history would have turned out quite well for you. People have been warning about the end of the world at every turn, basing those claims on apparently real evidence or on blatant myths, but so far no one has nailed doomsday. They won’t stop trying, and now this game has moved from the realm of mysticism to that of rationalism.
It’s hard to deny that we’re living in special times, though, at least technologically speaking. Not just as a result of AI, of course, but this age isn’t like any other before—we’re in the exponential age. And, living in such special times, we’re, by extension, just as special. It feels great, I guess, for many AI boosters and alignment “nerds” to believe they’re so lucky that they’ve won the historical lottery. Ours seems to be—for better or worse—the most defining and definitive age for humanity. I think this fuels AI people’s hopes and fears that the future they’re crafting will be, borrowing Scott Aaronson’s nomenclature, either “Singularia” or “Paperclipalypse,” which represent nothing less than AI-driven heaven and hell.
But, despite our unusual rate of progress, deBoer’s observation remains the most probable one: in the absence of extraordinary evidence—which no one possesses—the safest bet is that the world will stay much as it is. This is the possibility that covers the majority of post-button-press scenarios, and it’s being increasingly dismissed by the mainstream media and AI experts alike. Instead of Singularia and Paperclipalypse—unrecognizable worlds, as Aaronson classifies them—we’re much more likely to end up in their more realistic, closer-to-today counterparts, “Futurama” and “AI-Dystopia.” In these two, claims about superintelligence bringing the Singularity or the end of the human species are buried in the annals of science fiction.
This is how Aaronson depicts Futurama, which differs from AI-Dystopia in that the socio-economic and political consequences are good instead of bad:
“AI systems are still used as tools by humans, and except for a few fringe thinkers, no one treats them as sentient. AI easily passes the Turing test, can prove hard theorems, and can generate entertaining content (as well as deepfakes). But humanity gets used to that, just like we got used to computers creaming us in chess, translating text, and generating special effects in movies. Most people no more feel inferior to their AI than they feel inferior to their car because it runs faster. In this scenario, people will likely anthropomorphize AI less over time (as happened with digital computers themselves).”
Sounds plausible to me. But which one are we headed toward, Futurama or AI-Dystopia? Aaronson acknowledges one more possibility—or rather, perspective—which probably matches deBoer’s view and which I see as the most reasonable prediction: there won’t be a consensus. Some people will describe our world as Futurama-like whereas others will be certain it’s an AI-Dystopia (most will lie somewhere in between).
This isn’t unlike what happens today. For some people, our civilization is undeniably going through the best era in terms of general well-being and quality of life. For others, it’s nothing short of a “neoliberal capitalist dystopia,” as Aaronson puts it. One reality, endless perceptions. If there’s one truth we can all agree on, it’s that even if the world changes, humans won’t.
Public awareness, forced extremism, and ignorance-driven arrogance
And this takes me to the last section, which is about humans more than AI. Because if that’s the case, if an inherently subjective experience of our shared reality is what will prevail in the future regardless of where AI progress takes us, then why has this artificial dichotomy between utopia and apocalypse grown so ubiquitous in AI discourse? Radicalization of opinion occurs spontaneously, at least to some degree, around relevant and timely topics, but I haven’t seen it happen so clearly and overwhelmingly before at the expert level. I believe the reason lies in an unfortunate combination of three things: public awareness, forced extremism, and ignorance-driven arrogance.
For some reason, the experts and the people-who-can-make-things-happen groups barely overlap (not only in AI). AI experts, many of whom have become well-known public figures in recent years due to AI being a gravitational center of interest and attention, devote a lot of energy to expressing their beliefs out loud in order to generate a force in their preferred direction—and influence those who enact change.
However, in doing so they face a crucial dilemma: if they state their beliefs as they hold them in private, they will be truthful but often less-than-optimally effective. If they instead exaggerate those beliefs slightly (or not so slightly) in an attempt to dominate the opposing faction, they gladly trade some extremism for a greater chance of achieving—in their opinion—a positive impact.
This is what happens: public awareness of AI affairs pushes experts to lay out their arguments so that their odds of prompting concrete and enforceable policies increase. This, in turn, confronts them with the dilemma—and I believe most take the second route, even if unconsciously—which creates a weird, almost black-and-white landscape of arrogant and overconfident opposing takes backed by a deep ignorance (not theirs, but the whole field’s, as I said about Hinton’s recent worrying statements) about our ability to achieve AGI and their individual ability to predict the future.
As a result, many people—other researchers, potential investors, interested analysts, journalists, enthusiasts, and even politicians and policymakers—drift away from the center of the conversation. Some are motivated by a conscious decision to outweigh the other faction in defense of their beliefs. Some are unconsciously moved by those they follow and trust. And some are simply observers, like Bari Weiss of the Free Press, who further propagate the state of affairs with the sole intention of informing their audience, not meaning to take part, but taking part nonetheless.
Why is this detrimental to scientific inquiry and, indeed, to anything we collectively try to achieve as humanity? Because by trying to win the dialectic-turned-rhetorical fight in an us-vs-them fashion, they ensure that neither side gets it right, reducing our ability to “steer the wheel of progress toward a better future for all,” as I concluded in my previous article.
One of the unpopular opinions I hold is that the scientific community plays a critical and irreplaceable role (although it has important, untackled problems, e.g., it’s dominated by white men). Laypeople lack the criteria, knowledge, and background to decide on matters like the present and future of AI. By taking these conversations to the public town square and the debates about the questions they raise to social media, we are irremediably undermining AI as a scientific endeavor.
Here’s the only solution I see, doubling down on the conclusion I arrived at in my latest essay: to repair the damage and undo the artificial dichotomy between utopia and apocalypse, AI experts must reject the urge to convince and persuade, which drives them to the extremes, and they must be willing to admit their ignorance and their lack of confidence in their ability to foresee the future.
Last week I was more hopeful. Today, I’m less so. I don’t write this expecting it to happen. But I, seemingly in contrast to many AI experts raising their voices lately, acknowledge that what I think at any given moment is hopelessly influenced by how I feel. So here’s my final piece of advice: they should prevent their emotions from taking over their otherwise lucid minds; because, in the end, we’re just humans.