Why Elon Musk’s Fear of AI Is Misguided
A review of his opinions on the occasion of the upcoming Tesla AI day on Sept 30
Elon Musk is the richest person in the world. He’s a much-talked-about celebrity, equally hated and loved in tech circles. He’s also a notable opinion-maker. It is in this regard that today’s discussion of Musk matters to us. Not because he’s the best positioned to opine strongly about AI, but because he reaches millions of people and can easily influence the world’s view—and maybe yours, too.
This article isn’t intended to dismiss his ideas—even if he’s not the most knowledgeable person about AI, he’s not a fool—but to analyze more deeply why he thinks what he thinks, and where I disagree with him. Importantly, I don’t claim to know more than he does on this topic. My goal is to show you an alternative view of AI so you can decide for yourself what to believe.
I’m going to focus on his fear of AI: He’s worried it could become an existential threat if we continue on the current path of progress. First, because it’s an increasingly hot topic among AI insiders (I recently shared the news that 1 in 3 NLP scientists think AI could cause a global catastrophe at the level of an “all-out nuclear war”). Second, because it’s extremely urgent if he turns out to be right. And third, because even if he’s wrong, there’s a lot to learn from the causes of his worries.
I don’t really like making Musk the center of my writing, but it’s useful to have some context before the September 30 AI Day event, at which Tesla is supposed to reveal a working prototype of its humanoid robot, Optimus (I’ll cover it for TAB on Friday).
3 quick reasons why this article is worth reading
To resolve your doubts: There are many narratives about AI’s real risks and dangers. This article covers Musk’s opinions and beliefs, and their foundations. Is he right in what he believes? Where is he wrong?
To help someone you know: AI is as impactful as Musk is influential. His opinion can have a decisive effect on people who listen to him but can’t adequately assess the truth behind his beliefs. Here’s a contrarian perspective.
To know other perspectives: Many AI insiders who don’t share Musk’s powerful social media megaphone hold opinions much closer to mine than to his. This article can act as a proxy for their beliefs.
Elon Musk has spent a decade warning us of AI
Musk hasn’t been shy about making alarming statements on the dangers of AI. His first public statements on the topic go as far back as 2014, shortly after Google acquired DeepMind—which even back then promised to be a big AI player.
In an interview with CNBC in June 2014, Musk talked about his investment in DeepMind and how he wanted to “keep an eye on what’s going on with AI [because] there’s a potentially dangerous outcome there.” To which he added, laughing, “there’s been movies about this … like Terminator.” Even if he intended it as a half-joke, comparisons between AI and sci-fi are tired—and they confuse the general public, whose only references for these kinds of technology lie in popular culture.
A month later, in July 2014, Nick Bostrom, a philosopher at the University of Oxford—and one of the first people, at least in recent times, to voice concerns about the possibility of AI getting out of control—published his famous book “Superintelligence”. After reading it, Musk told his millions of Twitter followers that “[AI is] potentially more dangerous than nukes”:
How can a conceptual idea that lives in an I-don’t-know-how-distant future be more “dangerous” than nuclear bombs, which could wipe out everything right now?
In October 2014, Musk doubled down on his tendency to find rhetorical comparisons for AI. He participated in the MIT AeroAstro Centennial Symposium. When an attendee asked for his thoughts on AI, he responded: “I think we should be very careful about AI. If I were to guess like what our biggest existential threat is … probably that.” And he added, “with AI we’re summoning the demon,” again painting a dramatic picture to provoke an emotional response—probably with the intention of inspiring appropriate action.
In 2015, realizing he was not being taken seriously by regulatory bodies, he decided to team up with prominent figures in the field who shared his beliefs. In January, Musk donated $10M to the Future of Life Institute (besides Bostrom, physicist Max Tegmark and computer scientist Stuart Russell are also part of the FLI and, not surprisingly, share Musk’s worries). The idea was to keep AI “beneficial to humanity.” Later that year he co-founded OpenAI with Sam Altman as a non-profit to stop AI from overtaking humanity. We all know how that turned out.
Two years later, in 2017, Musk spoke at the National Governors Association Summer Meeting and said that “robots will be able to do everything better than us … I mean all of us.” The next month he shared his concerns about AI safety on Twitter:
The following year, Musk participated in the South by Southwest tech conference, where he criticized the naivety of AI experts: “Some AI experts think they know more than they do and they think they're smarter than they are.” To which he added, “I'm very close to the cutting edge in AI and it scares the hell out of me.” He concluded by raising the tone of his 2014 tweet. Now AI wasn’t “potentially more dangerous than nukes,” it was certainly worse. By far: “Mark my words: AI is far more dangerous than nukes.”
In April 2018, Musk appeared in a documentary by Chris Paine entitled “Do You Trust This Computer?” He explained that the worst-case scenario would be a “godlike digital superintelligence” that could become “an immortal dictator from which we can never escape.”
At the end of 2018, journalist Kara Swisher interviewed Musk on the Recode Decode tech podcast. Swisher asked him about the possibility of AI keeping us as house cats instead of killing us—a possibility Musk himself had hinted at two years earlier at Recode’s Code Conference. Musk argued that “as AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger.”
Two years later, in 2020, Musk was interviewed again, this time by the NYT’s Maureen Dowd. She mentioned “Elon’s Crusade”—as his friends call his effort to warn the world about AI becoming smarter than us. Although he admitted being less “fortissimo” about the “A.I. warning drama game,” as he put it, he sounded the alarm again: “We’re headed toward a situation where A.I. is vastly smarter than humans and I think that time frame is less than five years from now.” But he softened it right away: “But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”
One of Musk’s latest public appearances in which he referred to AI risks came earlier this year, when he was interviewed by Mathias Döpfner, Insider’s CEO. This time, Musk listed “AI going wrong” as one of his top three existential threats.
I share the core reasons for Musk’s beliefs—but not the beliefs themselves
That wasn’t a thorough or complete list of Musk’s warnings on AI, but you surely get the idea. Even if you already knew about most of them, reading them one after another makes a stronger impression. Musk has spared no effort in advising the AI community to slow down and reflect.
There’s also a clear common factor in Musk’s warnings. Despite how broadly AI cuts across society and its potential to affect the very pillars of our world as it is today, he seems to care mainly about distant existential risks—risks so far in the future that he has to resort to sci-fi to picture them.
The selection of sources I chose to illustrate his stance is likely biased, but even if you account for all of Musk’s AI worries, his perception of it as an existential threat clearly prevails over all the others. (To his credit, he has repeatedly advocated for a universal basic income as a preemptive solution to AI’s threat to the workforce. Still, this seems to pale in comparison with the possibility of AI wiping out humanity.)
His fears depict AI as a large-scale but uncertain threat. He’s worried that AI could put our civilization at risk, but he can’t explain how or when—because no one knows. In CNBC’s 2014 interview he repeatedly said “I don’t know” after the journalists pressed him for details. Even today he’d have to say “I don’t know” if someone asked him how his fears will materialize. In the 2020 interview with the NYT’s Maureen Dowd, he backed down right after saying that AI would be smarter than humans by 2025: He just meant that things would get “unstable or weird”—you can hardly find two more ambiguous words.
The reason for all this imprecision is that no one—inside or outside the field—knows what’s going to happen with AI. Musk probably thinks about the worst-case scenario when he warns us about AI in such strong terms. But what about the less-than-worst threats that AI is making a reality right now? Let’s get to the core of Musk’s arguments.
His warnings throughout the years stem from a reasonable belief: The AI community is progressing very fast without adequate care and proper analysis of the potential risks. He began his “crusade” in 2014, and the Cambrian explosion AI has experienced since has proved him right: AI is moving so rapidly that not even insiders can keep up.
I agree with him here.
Musk has also been quite consistent in his proposed solution to this problem (also since 2014): We have to establish some kind of regulatory oversight—at the national or international level—so we can advance slowly but surely in the right direction. Musk isn’t usually an advocate of government regulation, so he must be really scared.
I also agree with him here.
But although the causes of our fears (mine and his) are similar, and the solutions we defend partially overlap, we actually worry about very different things. Our beliefs about what’s most urgent to tackle now in AI—and which consequences we should watch out for—couldn’t be more distant.
We’ve arrived at the point I wanted to show you: Even if Musk seems contrarian to most technologists here, and closer to ethicists and others fighting AI hype, the reasons that move him and the reasons that move them couldn’t be more different.
What matters to me: AI is happening here and now
I can’t say I’m worried about AI becoming “a demon,” “an immortal dictator,” or “a superintelligence.” Or about it being “more dangerous than nukes.” But not because I don’t think it could pose a tremendous threat to humanity—even to the point Musk claims. It could. Possibly. I don’t know. I can’t even think about it in concrete terms.
The thing is, the mental space I devote to worrying about AI risks and harms is completely filled with other AI problems that, even if less than worst-case, are very real—and are happening now.
I’m worried about AI taking our jobs while governments and regulatory bodies work too slowly to prepare adequate backup plans or safety nets that can secure the well-being of those being replaced. And it’s now clearer than ever that no one is safe from this—neither blue-collar nor white-collar workers: In the same way that John Deere’s self-driving tractors are replacing farmers and cooking robots are replacing fast-food workers, GPT-3 is replacing writers and Stable Diffusion is replacing illustrators and artists.
I’m worried about targeted recommender systems that can define our views of reality, influence our tastes, and even decide what we watch, read, listen to, and learn about. And, in the case of the youngest users, the consequences can be deadly.
I’m worried about biased AI-powered services that reinforce and perpetuate discrimination toward minorities. Be it in the form of facial recognition systems, crime prediction systems, or virtually any AI system that has been fed the already toxic and biased data that populates the internet.
I’m worried about the intentional misuse of AI when someone takes advantage of open-source initiatives created with the best intentions in mind. And I’m worried about unintended misuse, because a slight AI misalignment can wreak havoc.
I’m worried about AI misinformation and the ease with which anyone can access AI systems that paint a picture of reality that doesn’t resemble what’s out there at all. And I’m worried about a dead internet in which we can’t distinguish what’s human-made from what’s AI-made—and in which we see our shared reality slowly evaporate before our eyes as we flood the web with AI-generated content.
And why do I worry about all that instead of AI's so-called long-term existential risk? Because I value the well-being of real people who are suffering, here and now, the consequences of the unbounded development of AI technologies more than the illusion of a distant glorious future in which we’ve conquered the galaxy thanks to a superintelligence—if we manage to tame it in time.
That’s not to say that Musk—and others with the same ideas—don’t worry about the problems I’ve just described. They may care. But in their minds, those problems are insignificant: regardless of how many people they affect, it’s nothing compared to the survival of the species—even if the threats they worry about are centuries away in the future.
What’s the suffering of a million lives now worth compared to the existence of the trillions that could be born?
That’s their mindset.
I can’t share their views.