What You May Have Missed #2
AI could cause a global catastrophe / Kids using AI to write essays / Clearview AI to redeem itself / AI artist copyrighted Midjourney's work / AI pioneers disagree on its future: Is scale enough?
I know, I know. I said WYMHM would be a twice-a-month column. The thing is, too much is happening in AI right now to fit two weeks' worth of relevant news into a single blog post.
I want WYMHM to be a short (so it doesn't overwhelm you with info), curated (so it gives you what's most important and/or useful) list of links. Sending it every 15 days would make it either too long or too sparse.
That's why I've decided to make WYMHM a weekly column. I've also found that expanding a bit more on my commentary can be helpful. Everything else stays the same (except that I'll have a lot more work!), but it's worth it if you get more value from the newsletter.
I've also decided that from now on ALL Tuesday and Friday TAB articles will be freely available. Paid subscribers will get the additional benefit of a broader view of what's happening in AI through the weekly WYMHM column, which for some of you will be significantly more valuable than the other TAB articles.
As always, your feedback is super welcome, so reach out to me anytime in the comments or by email!
Now, without further ado, let’s get into this week’s WYMHM.
1 in 3 NLP researchers think AI could cause a global catastrophe this century (an “all-out nuclear war”)
What happened
On August 26 this year, a group of natural language processing (NLP) researchers published the results of a survey exploring the beliefs of their fellow NLP researchers (the people currently pushing AI forward). They asked about the state of the field, ethics, language understanding, and AGI risks, among other topics. The question that made me include this news in WYMHM was this one:
“AI decisions could cause nuclear-level catastrophe. It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.”
About a third of respondents (36%) answered “agree” to that statement. The paper also notes that some respondents found “all-out nuclear war” too extreme a phrasing and said they would have agreed with a weaker statement, which implies that 1 in 3 is an underestimate.
Why it’s relevant
Think about this: 1 in 3 NLP researchers believe AI could cause a global catastrophe. We're talking about some of the most knowledgeable people in AI from a hands-on perspective (the survey includes people from both academia and industry).
If so many AI researchers (although not the majority) think AI could cause such an outcome, maybe they should think twice about the path they're following. Are they focused on the right problems? Are they taking enough safety measures? Do we have, as a society, the resources or ability to avoid this existential risk (x-risk) once AI progress passes a point of no return?
Linguist Emily M. Bender has repeatedly argued that we shouldn't treat progress in AI (or any other field, for that matter) as a given, as if it were a phenomenon of nature. It's not something we merely witness; it's something we produce. So we can slow it down.
Still, these so-called x-risks may not be the most pressing concern right now. The same survey found that 73% of respondents think “AI could soon lead to revolutionary societal change.” The same arguments apply here: maybe the time has come to stop and reflect.
Read more