Here's Why People Will Never Care About AI Risk
It is an irrational fear, but people are afraid of stupider things
I think a big problem with getting the public to care about AI risk is that it’s just a huge emotional ask: asking someone to really consider that there’s a solid chance the whole world is about to end. People will instinctively resist it tooth and nail.
People instinctively resist the idea that the world could be about to end. I agree with that, for three reasons. First, we don’t want to die, and the brain is, if anything, an adept defense-mechanism-creating machine; denial is universal. Second, we literally can’t imagine a “no-world” reality where our beloved Earth doesn’t exist. Third, we are very bad at imagining unprecedented events and unprecedented change.
AI risk is a huge cognitive ask we can’t afford
All of that applies just as well, if not better, to threats other than AI: a meteorite, an alien invasion, a deadly pandemic, the sun swallowing our collective home in a spectacle of fire, energy rays, and indifference. But there’s a different reason that makes AI-driven existential risk harder to imagine than any of those.
Besides being a “huge emotional ask,” as Schmidt puts it, AI risk is a huge cognitive ask.
Unless you’ve spent a disturbing amount of time thinking about this and have read the arguments for and against AI risk and AI safety, including Yudkowsky’s Sequences, it’s very hard to imagine how we go from ChatGPT, a chatbot best analogized, as Ethan Mollick likes to say, as a super-fast dumb intern with an eidetic memory, to a superintelligent rogue AI that wipes us out by quietly synthesizing a nano-pathogen in the water supply.
The mental gymnastics required to follow the chain of reasoning that leads there, and to believe it deeply enough to overcome our refusal to contemplate our own deaths, are too much. People will never care about AI existential risk as much as, say, climate change or nuclear war, because thinking about it is intrinsically abstract. It feels too distant, too improbable, too cognitively demanding: something that lives, and will only ever live, in philosophical discussions that have nothing to do with our mundane affairs.
We don’t like that. We prefer simple thinking, not too deep if possible, and definitely the kind that doesn’t require insightful imagination. That’s not to bash us as a species; it’s just a logical continuation of the minimum-energy principle and the unavoidable fact that each of us already has more than enough on our plates without abstract doomsday scenarios.
Climate change and nuclear war scare us, though
But then, why are climate change and nuclear war easier for us to perceive as catastrophically dangerous? Even when thinking about them puts an emotional tax on us, we do it from time to time. Some people, all the time. Perhaps not super seriously — who really, truly thinks about their own death? — but seriously enough.
Our knowledge of climate change derives from a mix of scientific observations and theoretical predictions, so unless people have direct access to those, and the expertise to deduce the implications themselves, they believe climate change is real only because someone has told them. A reality built on second-hand testimony.
That’s fine, though, because climate change theories (in contrast to imagining a superintelligence making paperclips with the sun’s energy) predict effects we can feel first-hand. If summer is hotter year after year, that’s a good proxy to start believing climate change may eventually burn us to ashes. Or, at the very least, that it will create a potentially catastrophic collective social burden in the form of climate refugees.
But is an unusually hot summer even enough? What about evidence on a world-destroying scale? The global repercussions (e.g., unusual natural phenomena) are constantly broadcast on TV and the internet. It’s a very visual, perceptual issue: a rare volcanic eruption, rare floods, rare storms… As long as you are open to believing it, it’s easy to find direct and indirect evidence without thinking much, even if our survival instinct keeps us from treating the threat as certain until it hits us in the face.
Nuclear war, same thing. Super visual. The US dropped two atomic bombs on Japan, and we all know the literal repercussions of a nuclear bomb being dropped (how many movies, books, and documentaries have been made about the Hiroshima and Nagasaki bombings, or about a hypothetical nuclear war?).
Also, we have a very intuitive understanding of what an explosive weapon is; the news shows them all the time. Even if the scale of a nuclear explosion is hard to imagine, it’s much easier for our limited minds to make a quantitative extrapolation (e.g., a bigger bomb) than a qualitative one (how do you even begin to imagine the mind of something thousands of times more intelligent than you are, really?).
What about killer robots?
A counterpoint to this hypothesis is popular science fiction. Hollywood movies like The Terminator, The Matrix, or 2001: A Space Odyssey provide very visual depictions of AI-driven human extinction scenarios.
My reason for rejecting this as a potential vector for getting people to take the AI existential risk narrative seriously is that the storytellers trying to turn it into a generally accepted one explicitly reject the killer-robot idea: if AI kills us, they say, it won’t be a badass-looking shiny metallic robot with a machine gun and sunglasses. Instead, it will be silent and precise, alien in form, motivation, and methodology.
I mean, let’s be honest. You and I care about modern AI, but most people don’t. I’m having a hard time convincing my friends of the importance of AI now or in the short term. No one uses GPT-4. Almost no one uses ChatGPT. No one knows anything about what’s happened in the year since it was released. Verifying these claims doesn’t require belief, trust, faith, or predictive prowess. Just look around.
Not even those things, absolutely obvious to you and me, penetrate people’s barriers of everyday normality or overcome their inability to accept that the world is changing faster than ever before.
So yeah, AI risk will probably remain a niche topic until it dissipates into nothing.
Or until a rogue AI kills us all.
It doesn’t really matter either way.
All such fears of death-dealing scenarios rest on the assumption that life is better than death, a belief for which there is no compelling evidence, let alone proof.
There are plenty of theories about death, held with varying degrees of conviction, but none of us actually knows what death is. Nor does there appear to be any way for us to know, given that knowing would require traveling to death and then returning to life to file a report. Some people feel near-death experiences fit this description (they typically provide a quite wonderful view), but then near-death is reasonably defined as not being actual death.
We don't even really know what we mean when we say "I will die". Who is "I"? The philosophers have been asking "Who am I?" for centuries, and yet this question remains unanswered. We typically assume that we know who "me" is, but rarely give the matter any serious examination.
We might say that there are layers to "me".
1) When we look at a human being, what we typically see first is their clothes. But of course the clothes are not "me", just a discardable exterior shell.
2) Under the clothes is our body. Medical science is increasingly able to replace nearly every part of it. And I've read that every cell in our body is naturally replaced every seven years or sooner. So it seems our body is not "me" either, just another layer.
3) Under our body is our mind: our thoughts, memories, dreams, fears, opinions, beliefs, personality, etc. Are our thoughts "me"? If yes, then it could be said that we don't really exist in the first place, as our thoughts appear to be just a pattern of relationships between neurons, not an actual physical "thing". And if we don't actually exist, how could we die?
4) Is there another layer of "me" below our thoughts? Some people have theorized there is something like a soul, and that it is who "me" really is. Does this soul-"me" die? Again, all we have are more theories lacking any proof.
When it comes to the question of what death is, we seem to live in a state of ignorance. If that's true, what is the rational response? It seems to me it would be to first acknowledge our ignorance, and then embrace some story about death which enhances our living by removing some of the fear. For example…
Personally, I'm attracted to the reports of near-death experiences, which are typically very positive. People who have had such experiences often report being disappointed that they had to come back to life. But I wouldn't attempt to sell the near-death experience story to others, as I have no proof, and it's better that each person come to their own preferred positive theory of death by their own methods.
What's perhaps interesting is that in a state of complete ignorance one is liberated from facts, because there are no available facts about what death really is. And so one's perspective on death cannot be measured by its relationship to facts, but instead by the value it delivers to one's living. Put another way, religion is not science, but a different enterprise with its own unique value.
I've begun to suspect that our ignorance and fears about death are a necessary mechanism for maintaining life. If we knew for a fact that death is wonderful, as some people claim, why would we bother with the challenges of living? And so one's personal perspective on death should be positive, but perhaps not too positive. Should I kill myself if I have tooth pain? Well, perhaps not just yet, given that my positive death story is built upon ignorance.
Alberto is right: it's rare for us to give such matters much thought. And perhaps that's because there is a much easier way of dealing with death: keeping ourselves so busy and distracted that we don't have time to think about it much, and thus can more easily reside in the bliss of denial. There's a logic to that approach too.
Great piece. Totally agree, except that I think it's important to talk about it even if it falls on deaf ears. Let's hold out hope that enough people with the power to do something are sufficiently exposed to dialogue around AI risks that some of them may in fact pull back or otherwise take precautions. Pausing could backfire, true, but it could also help. And I much prefer "going down trying" to nihilistically observing the incoming train wreck with a resounding "Oh well, nothing we can do."