What You May Have Missed #30
AI risk statement / AI journalists doing a good job? / Research (Social Turing Test) / Product (courses by Google & Andrew Ng) / Articles (The Illusion of China’s AI Prowess) / Misc (Is GPT-4 worse?)
CAIS Statement on AI Risk
CAIS Statement on AI Risk: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Some of the signatories are among the highest-profile figures in AI, including Geoffrey Hinton, Yoshua Bengio, Sam Altman, Demis Hassabis, and Dario Amodei. Dan Hendrycks (CAIS director) says “AI researchers from leading universities worldwide have signed the AI extinction statement, a situation reminiscent of atomic scientists issuing warnings about the very technologies they've created.”
An important addition from the CAIS LinkedIn announcement post: “As indicated in the first sentence of the signatory page, there are many “important and urgent risks from AI,” not just the risk of extinction. AI poses serious issues in the form of misinformation, deepfakes, bias, lack of transparency, job displacement, cyberattacks, phishing, and lethal autonomous weapons. These are all important risks that need to be addressed. Societies have the capacity to manage multiple risks simultaneously; it’s not about choosing "either/or" but embracing a "yes/and" approach.”
Reactions and comments, mostly contrarian takes that I consider worth listening to (the arguments in favor of signing are pretty clear):
Yann LeCun: “AI amplifies human intelligence, which is an intrinsically Good Thing, unlike nuclear weapons and deadly pathogens. We don't even have a credible blueprint to come anywhere close to human-level AI. Once we do, we will come up with ways to make it safe.”
Emily M. Bender: “We should be concerned by the real harms that corps and the people who make them up are doing in the name of "AI", not abt Skynet.”
Noah Giansiracusa: “My view: AI will have many harms and benefits and we should tread cautiously as society attempts to navigate that delicate balance. This statement's view: AI could kill us all!! So let's keep doing AI but try to make it less likely to kill us all.”
Ian Goodfellow: “I've spent several years studying machine learning security with the goal of making ML reliable before it is used in more and more important contexts. Unfortunately, ML capabilities and adoption are growing much faster than ML robustness.”
Gary Marcus: “Mitigating AI risk should absolutely be top priority, but literal extinction is just one risk, not yet well-understood; many other risks threaten both our safety and our democracy. We need a balanced approach.”
Deb Raji: “Why should something need to be framed as an ‘existential risk’ in order to matter? Is extinction status that necessary for something to be important? The need for this ‘saving the world’ narrative over a ‘let's just help the next person’ kind of worldview truly puzzles me.”
Hardmaru: “While the rest of the world is focused on runaway AGI existential ‘risks’, large scale machine learning is systematically applied by authoritarian governments to cement total control of the population.”
Brian Merchant: “No one is making Google and OpenAI develop AI that puts humanity at ‘risk of extinction.’ If they honestly thought it was such a dire threat they could stop building it today. They do not, so they won’t.”
Seth Lazar, Jeremy Howard, & Arvind Narayanan: “The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that.”
Ryan Calo: “If AI threatens humanity, it’s by accelerating existing trends of wealth and income inequality, lack of integrity in information, & exploiting natural resources. Think GreatDismal’s jackpot, not Skynet. I agree with the simple statement to this degree.”
François Chollet: “To be clear, at this time and for the foreseeable future, there does not exist any AI model or technique that could represent an extinction risk for humanity. Not even in nascent form, and not even if you extrapolate capabilities far into the future via scaling laws.”
Margaret Mitchell: “Look guys. If you're worried abt an extinction-level event from AI, one way to create that event is to enable LLMs to take actions in the real world. One way to enable LLMs to take actions is to give chatbots the ability to incorporate ‘plugins’.”
Here’s a question: can we really compare AGI with atomic bombs? Physicists understood the laws of physics, grounded in robust theoretical frameworks developed over centuries, and that is how they arrived at their conclusions about the bomb; what exactly do AI researchers know about AGI, or about the dangers it poses? I believe the true motivation behind this particular statement is that these people genuinely believe they have become the new Manhattan Project scientists, about to create something deadly for humanity, and therefore feel a moral responsibility to warn the world. But is this a question of self-responsibility, or of self-importance?