5 Comments

Solving protein folding is about as unambiguous a scientific contribution as you can get. Do any implications come to mind about the spread of that technology?


Well, in the sense of applications the general public can use directly, I don't think so.

However, DeepMind open-sourced everything, so universities and researchers around the world are using AlphaFold's discoveries to further understand the behavior of proteins.

I wrote about BLOOM a few weeks ago and said it was "the most important AI model of the decade." If I had to think of another contender for that title, it'd probably be AlphaFold.


Speaking of AlphaFold, DeepMind just published a blog post, "AlphaFold reveals the structure of the protein universe": https://www.deepmind.com/blog/alphafold-reveals-the-structure-of-the-protein-universe


Is "Next Few Years" to restrictive? Let me rephrase. What is your guess as to the first general application of AGI, no matter when that might be? I am interested in how non-obvious a question this is. The argument that AGI technology is going to "take over" is an indirect piece of evidence as to how tricky the question is.


I think debates about AGI should start by defining the term. In the majority of cases, that's not what happens.

For instance, people like Yann LeCun argue that AGI is a meaningless concept because we can't define "general" in this sense. Many people use AGI as a proxy for human-level (not necessarily human-like!) intelligence, but that's not what general means. Human-level intelligence isn't in any way general.

People who view AGI as a proxy for human-level intelligence get attached to the concept because it keeps us at the center of the debate: AGI is a relevant milestone only because it's the point on the intelligence spectrum where we lie. I don't think comparing how intelligent humans are with AI is interesting, except when we do it to take inspiration from the brain.

Also, and this is just as important, AI already surpasses us in many aspects we consider part of our intelligence, like computing speed or memory capacity. If AI eventually surpassed us in the other hallmarks of human intelligence as well, wouldn't it automatically be more than AGI, if we defined the term as human-level? Unless we build AI exactly like a human, it will never have human-level intelligence.

Other people understand AGI as a literally "general" intelligence: an AI that can do well at any and every possible task. The question then is, does that make sense? Is it possible to create something extremely good at doing math and, at the same time, extremely good at playing football and extremely good at understanding people's inner emotional states? And why would we want that?

Another problem arises here: we haven't yet defined intelligence itself. Is playing football well a reasonable measure of some kind of intelligence? Would being great at football but terrible at math make me intelligent?

We start to see why talking about AGI gets tricky very fast. As you said, it's a "non-obvious" question. So the best I can say about AGI applications, or AGI taking over, is that I don't really know.
