What You May Have Missed #11
NeurIPS 2022 / OpenAI and ChatGPT / DeepMind's crazy week / Weekly dose of generative AI / Other AI news: McKinsey, Google, Karen Hao, and Noah Smith+roon
NeurIPS 2022
NeurIPS 2022 is over. The prestigious yearly hybrid conference on AI, ML, and computational neuroscience is one of the most followed events in AI. For those of you who didn’t attend but are interested, here’s a short list of highlights (this is a bit technical for TAB but too relevant, research-wise, to be left out).
First, the NeurIPS 2022 awards. These are granted to the best papers in three different categories: Outstanding Main Track Papers, Outstanding Datasets and Benchmarks Track Papers, and the Test of Time Paper. (Spoiler alert: the last one is exactly what you expect!) If you want a quick overview of NeurIPS, start here.
Second, I want to mention Geoffrey Hinton’s Forward-Forward algorithm paper (not among the awarded). Hinton is a key figure in the modern resurgence of AI and deep learning, best known for his contributions to the procedure that all DL neural networks use to learn: the backpropagation algorithm.
In his paper, he proposes a new algorithm that, if successful, may eventually replace backpropagation. What moves Hinton to explore this avenue is that backpropagation, although extremely useful and surprisingly effective, doesn’t resemble the processes humans use to learn.
In the introduction, Hinton asserts: “As a model of how cortex learns, backpropagation remains implausible despite considerable effort to invent ways in which it could be implemented by real neurons.”
Here’s a thread by Martin Görner explaining how the FF algorithm works.
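To give a flavor of the core idea, here’s a rough NumPy sketch of a single Forward-Forward layer update. This is an illustration of the general principle only, not Hinton’s exact formulation: the goodness function (sum of squared activations), the threshold, and the logistic loss used here are simplified assumptions. The key point is that each layer is trained locally, with no error signal propagated backward through the rest of the network:

```python
import numpy as np

# One Forward-Forward layer, sketched (illustrative, not Hinton's exact
# recipe). The layer's "goodness" -- here, the sum of squared ReLU
# activations -- should end up high for positive (real) data and low for
# negative (generated) data. The update uses only this layer's own
# forward pass: no backpropagation through other layers.

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 4))  # one layer: 10 inputs -> 4 units

def goodness(x, W):
    h = np.maximum(x @ W, 0.0)   # forward pass through this layer only
    return np.sum(h ** 2)        # layer-local goodness score

def local_update(x, W, positive, lr=0.05, theta=2.0):
    """One gradient step on a logistic loss that pushes goodness above
    (positive sample) or below (negative sample) a threshold theta."""
    h = np.maximum(x @ W, 0.0)
    g = np.sum(h ** 2)
    sign = 1.0 if positive else -1.0
    # derivative of log(1 + exp(-sign * (g - theta))) with respect to g
    dloss_dg = -sign / (1.0 + np.exp(sign * (g - theta)))
    grad_W = dloss_dg * np.outer(x, 2.0 * h)  # chain rule through ReLU
    return W - lr * grad_W

x = rng.normal(size=10)               # stands in for a "positive" input
before = goodness(x, W)
W = local_update(x, W, positive=True)
after = goodness(x, W)                # goodness does not decrease on real data
```

A full network would stack such layers and, crucially, feed both real and "negative" data (Hinton discusses several ways to generate it) in two separate forward passes, which is where the algorithm gets its name.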
Finally, if you want a more thorough review of NeurIPS 2022, here’s a nice selection by Sabera Talukder, a neuroengineering researcher at Caltech, and a thread summary by Nvidia’s Jim Fan.
OpenAI and ChatGPT
ChatGPT has been the main character at TAB for the past two weeks—and it seems it’s not over yet. As long as it’s on everyone’s lips, I’ll keep updating you about it.
I focused my latest article on the potentially harmful effects of ChatGPT (and language models in general), and the solution OpenAI is working on to prevent misuse (e.g. misinformation, impersonation, etc.): an invisible watermark.
Unsurprisingly, others share my concerns and are also raising important questions about the future we’re heading toward. Let’s start with someone you may not expect. Someone who, at the same time, gives ultimate credibility to these issues.