What You May Have Missed #23
Twitter algo / AI regulation / Open letter aftermath / E. Yudkowsky / Deepfakes / Bard / GPT-4 / R&D and products / Worthwhile magazine essays / Miscellanea and curiosities
This week I'm trying a slightly different format/schedule for the WYMHM column. I did a 10K-subscriber AMA last week, so I've accumulated news for two weeks, i.e., an eternity in AI time.
I have so many links that doing my usual analysis would yield a 5K-word blog post (that nobody would read). Instead, I'll summarize each link in a few short sentences. Let's see how this turns out. Also, I've been experimenting with sending the WYMHM column early on Mondays instead of Sundays and will continue to do so for a few weeks.
Finally, I’m thinking of renaming the column to something more attractive. Suggestions welcome! Let me know what you think about the changes in the comments.
Index
The Twitter algorithm is open source
Regulation is becoming a reality for AI
The aftermath of the FLI open letter
Eliezer Yudkowsky and the longtermist view of AI
Deepfakes: AI for deception
Google’s Bard: Underwhelming
On the potential of GPT-4
Research, developments, and products
Worthwhile magazine essays
Miscellanea and curiosities
The Twitter algorithm is open source
Twitter has open-sourced its algorithm. They've released the recommender system training pipeline but not the weights, which would reveal the secrets behind the behavior of Twitter's feeds. Here's a nice thread explaining how the RecSys works.
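To make the "recommender system" part concrete, here's a minimal sketch of the two-stage pattern (candidate generation, then engagement-weighted ranking) that feed recommenders like Twitter's generally follow. Everything below (the Tweet class, get_candidates, ENGAGEMENT_WEIGHTS, and the weight values) is a hypothetical illustration of the pattern, not code or values from Twitter's repository.

```python
# Minimal two-stage recommender sketch. Illustrative only: feature names
# and weights are made up, not Twitter's published values.
from dataclasses import dataclass

@dataclass
class Tweet:
    id: int
    p_like: float     # predicted probability the user likes the tweet
    p_retweet: float  # predicted probability of a retweet
    p_reply: float    # predicted probability of a reply

# Stage 1: candidate generation. Pull tweets from several sources
# (in-network follows, out-of-network similarity, etc.) and cap the pool.
def get_candidates(in_network, out_of_network, max_candidates=1500):
    return (in_network + out_of_network)[:max_candidates]

# Stage 2: ranking. Blend predicted engagement probabilities into a
# single score. The relative weights here are hypothetical.
ENGAGEMENT_WEIGHTS = {"p_like": 1.0, "p_retweet": 2.0, "p_reply": 9.0}

def score(tweet: Tweet) -> float:
    return sum(w * getattr(tweet, f) for f, w in ENGAGEMENT_WEIGHTS.items())

def rank(candidates):
    return sorted(candidates, key=score, reverse=True)

# Example: two hypothetical candidates ranked by blended engagement score.
candidates = get_candidates(
    in_network=[Tweet(1, p_like=0.30, p_retweet=0.05, p_reply=0.01)],
    out_of_network=[Tweet(2, p_like=0.10, p_retweet=0.02, p_reply=0.08)],
)
for t in rank(candidates):
    print(t.id, round(score(t), 3))
```

The point of the sketch is the structure: a cheap first stage narrows millions of tweets to a manageable pool, and a second stage spends its compute scoring only that pool. The released code describes this machinery; without the weights, you can see how the feed is assembled but not why any particular tweet wins.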
Regulation is becoming a reality for AI
FTC attorney Michael Atleson published a blog post on deception and fraud, “Chatbots, deepfakes, and voice clones: AI deception for sale,” lauded by Emily M. Bender.
Abeba Birhane attended a meeting on the “Opportunities and challenges of LLMs” with UN regulators and OpenAI CEO Sam Altman, among others. Birhane’s highlight: “when asked what can be done [about] the mass generation and spread of misinformation enabled by LLMs, Altman said ‘the only way to tackle problems that arise from AI is with AI.’” Still, “limitations and critical question[s] were covered.”
Tate Ryan-Mosley writes for MIT Technology Review that “the internet is about to get a lot safer”: Europe's “Digital Services Act (DSA) and Digital Markets Act (DMA) … [set] a global gold standard for tech regulation when it comes to user-generated content.”
Sasha Costanza-Chock reminds “tech bros” that there's active regulation affecting AI systems: the Algorithmic Accountability Act of 2022 and the NIST AI 100-1 Artificial Intelligence Risk Management Framework (AI RMF 1.0).
Italy has banned ChatGPT and will “investigate OpenAI,” as reported by Politico. Sam Altman’s response doesn't allude to the real problem: OpenAI breaking Italian data privacy law.
The UK government recently published a policy paper titled “A pro-innovation approach to AI regulation.” Here's a highlights thread by policy advisor Jess Whittlestone.
Jack Clark, Anthropic co-founder (and ex-OpenAI), argues that GPT-4 should be viewed as a political artifact: “AI systems are likely going to have societal influences far greater than those of earlier tech 'platforms' (social media, smartphones, etc).” This also applies, of course, to Anthropic's Claude, Google's Bard, and Microsoft's Bing, among others.
The aftermath of the FLI open letter
I wrote an essay on why I think it won't work. TL;DR: OpenAI, Google, and Microsoft’s actions reveal that their implicit motto is “keep going until we fuck it up.” Keep going until they trip over a stone that makes us all fall. Then, and only then, will they rethink the path.
Gary Marcus published an essay earlier today arguing that he's scared not of AI but of people's negative reaction to the letter, and of how that reaction signals our inability to reach an agreement about both short-term and long-term risks. I couldn't agree more (my essay is about AGI but applies just as well to any other topic in the field: there are no clear authority figures, policymakers must act fast but in many cases aren't ready, and there are many unknowns we don't know how to handle).
I’ve read DAIR's response to the letter and agree with all the points it raises. I also agree with Marcus that the statement reads as if “their (very legitimate and important) cause—exploitative practices—outweighs all others; not ‘a focus’ but ‘the focus.’” Here’s Emily M. Bender’s indirect response to Marcus’ blog post. It gets harder and harder to discern the best path forward as you dig into the details. Bender argues that we can’t assume AI safety and AI ethics are independent; by definition, they’re at odds.
It all comes down to how each of us perceives what should be a priority. Sometimes it’s not at all clear what the ethical thing to do is, so agreement is extremely hard. The debate has only just begun, so expect to see more conversation around these issues.