The Missing Piece in Geoffrey Hinton's Newfound Fear of AI Existential Risk
In science, unknowns should be treated as such
The most relevant AI-related news of the week is that Geoffrey Hinton, the “Godfather of AI,” has resigned from Google Brain. The reason: He wants to warn us of the danger ahead. After the NYT scooped the story on Monday, news outlets—newspapers and TV channels alike—have continued to interview Hinton, echoing his worrying statements. Critically, most people—journalists and audiences alike—have misinterpreted or mischaracterized (or both) Hinton’s fear-inducing claims in a way that radically changes the story. And its repercussions.
It isn’t a stretch to say that Hinton is the Einstein of artificial intelligence. AI and machine learning, not unlike physics, are scientific disciplines fueled by collaboration and cooperation. Many crucial names besides Hinton have contributed to a similar degree. But it was his stubborn opposition in the 80s to the then-leading paradigm—symbolic AI—in favor of the more biologically inspired connectionist framework (i.e., neural networks) that eventually got us here. Modern deep learning, which became extremely successful in the last decade (far more than its detractors expected), is the fruit we’ve collectively reaped thanks largely to the seeds Hinton planted 40 years ago.
Hinton didn’t invent neural networks, and, contrary to common belief, he didn’t invent backpropagation either (he did make it work for deep neural networks), but he’s been an essential figure in AI. If I were asked to name the living person whose contributions have influenced the 70-year-old field the most, paving the way for state-of-the-art systems like ChatGPT, and who—for better or worse—holds the most responsibility for the popularization of AI and thus the spread of existential risk (x-risk) fears, that’d be Hinton.
The x-risk of AI—the possibility that a sufficiently advanced form of AI (often called a “superintelligence”) wipes out humanity, intentionally or not—is my focus today. Hinton’s recent claims about this kind of risk, however surprising, are reasonably among the most valued, given his expertise and authority (Eliezer Yudkowsky, for instance, has warned about AI x-risk for far longer, but he’s not an AI expert and definitely not as respected as Hinton). Hinton had never publicly expressed his fears before, but he’s now crystal clear, giving us all a motive for thoughtful reflection.
Yet I can’t help noticing that my reading of his worry differs from that of most people I’ve come across. I think the conversations and debates prompted by Hinton’s words on AI x-risk are necessary but seriously flawed. And I’m not talking about a philosophical conundrum over the definition of the concepts. It’s simpler: If we consider only the magnitude (how afraid he is) and commonality (how many experts share the opinion) of his newfound beliefs, we’d miss the third element of the equation: the confidence he has in whether those fears will materialize.
The truth is that neither he nor anyone else knows much of anything about what we should expect—how can we predict and unveil the unknowns of AI’s future when the present is largely a mystery? In short: He’s not worried because he knows, but because he doesn’t. Let’s see what this means and why it matters, so we (and journalists and analysts) can better frame the situation and our reactions to it.