13 Comments

Seems like defining benchmarks would be a solvable problem -- and also, a great way to involve non-specialists and multiple disciplines in the AI journey.

The Turing Test has been decisively beaten. What comes next?

Author

I agree, consensus definitions and evaluation methods are the first step to go from the claim to the truth behind it. If OpenAI or other companies don't give external evaluators access to their systems, however, that won't suffice.


Yeah, otherwise it's just smoke and mirrors from "The Great and Powerful Oz!"

As you might recall, I was blown away when, in the summer of 2022, I taught a ChatGPT variant how to play blackjack (one take, no code or supervised/unsupervised learning).

I drew the cards. The AI told me whether or not to keep drawing, and it played the strategy as well as your average human. ;)
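For what it's worth, here is a minimal sketch of how an exercise like that could be made repeatable. Everything here is my own illustration: `ask_model` is a hypothetical stand-in for whatever chat API you use (the stub below just plays "hit below 17" so the script runs as-is), and the prompt wording is invented.

```python
import random

# Card values for a suit-less 52-card deck; aces start at 11.
RANKS = {"A": 11, "K": 10, "Q": 10, "J": 10, "10": 10, "9": 9,
         "8": 8, "7": 7, "6": 6, "5": 5, "4": 4, "3": 3, "2": 2}

def hand_value(cards):
    """Best blackjack total, downgrading aces from 11 to 1 as needed."""
    total = sum(RANKS[c] for c in cards)
    aces = cards.count("A")
    while total > 21 and aces:
        total -= 10
        aces -= 1
    return total

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder for a real chat-model call. This stub
    # plays the dealer's "hit below 17" rule so the sketch is runnable.
    total = int(prompt.split("total ")[1].split(")")[0])
    return "HIT" if total < 17 else "STAND"

def play_hand():
    """Deal a hand and let the model decide HIT or STAND each turn."""
    deck = list(RANKS) * 4
    random.shuffle(deck)
    hand = [deck.pop(), deck.pop()]
    while hand_value(hand) < 21:
        answer = ask_model(
            f"We are playing blackjack. My hand is {hand} "
            f"(total {hand_value(hand)}). Reply HIT or STAND."
        )
        if "HIT" not in answer.upper():
            break
        hand.append(deck.pop())
    return hand, hand_value(hand)

if __name__ == "__main__":
    hand, total = play_hand()
    print(f"Final hand: {hand} (total {total})")
```

Scoring the model's HIT/STAND choices against a basic-strategy table over many hands would turn the anecdote into something closer to a benchmark.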

I wonder whether more exercises of this sort (where humans "teach" AIs some skill or game, or ask them to reflect back on a shared experience through active listening) could be integrated into the definition of AGI.

Sep 29, 2023 (edited)

We will know that (~human-level) AGI has been achieved when the unemployment rate in advanced economies exceeds 30%. This definition doesn't require a technical test for the AGI itself.
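A criterion like this is checkable from public statistics alone. A minimal sketch of one reading of it (averaging across economies; the country list and rates below are placeholders, not real data):

```python
# Sketch of the proposed macro-level AGI test: it needs no access to the
# AI system itself, only published unemployment statistics.
# Placeholder numbers; swap in real figures from your statistics source.

THRESHOLD = 30.0  # percent, per the proposed definition

unemployment_pct = {  # hypothetical advanced-economy readings
    "US": 3.8,
    "Germany": 5.7,
    "Japan": 2.7,
}

average = sum(unemployment_pct.values()) / len(unemployment_pct)
print(f"Average unemployment: {average:.1f}%")
print("AGI criterion met:", average > THRESHOLD)
```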


Very thoughtful, Alberto. What will happen when the existence of AGI is credibly declared, or at least believed to be true by some critical mass of the public? Of course I don't know, but I'll play along and guess based on the following history.

What happened when the reality of mass-produced nuclear weapons became clear to humanity? We got all upset for a while, and couldn't figure out what to do, so we decided to ignore this new reality and pretend that it didn't exist.

What happened when the existence of high-performance craft of unknown origin in our atmosphere was proven beyond question to any reasonable person? We skipped getting all upset this time and mostly went straight to ignoring this incredible historic phenomenon. I'm amazed how little attention UFOs get on Substack and beyond, but I shouldn't be.

How many of us have made significantly inconvenient edits to our personal lifestyle in response to a coming climate change calamity? Not many. Not me.

How do we relate to our own personal mortality? We pretty much ignore it as much as possible.

As this theory goes, when any event is too far outside the comfy confines of complacent normality, we tend to sweep it under the rug.

I would expect the media to go wild over AGI news for a while, and then the story would stop boosting ad revenue on their networks, and so they would move on to coverage of some other, far smaller subject.

Finally, AGI seems to typically be thought of as an end-of-history development in the AI community, the last big invention, etc. That seems very unlikely to be true. The knowledge explosion will keep right on rolling along at an ever-accelerating pace. AGI will just add more fuel to that fire.

What comes after AGI? I have no idea, just as those living a century ago in 1923 wouldn't have been able to imagine AGI.


Frankly, I am just getting sick and tired of dealing with OpenAI. Bring in another front-runner. I am under no delusion that the newcomer will be more transparent or anything. I am just done with Altman and his cronies right now. I love your approach in this article. It sort of reminds me of the way the medieval theologians parsed the question of God's existence by peeling away all the minute layers. I really like the layers you are unfolding right now. I wonder if you equate achieving AGI with achieving the singularity? Full disclosure: I am with Winograd and Harnad and the old guard of connectionists in thinking that we are still a long way from AGI. Of course, if there is evidence to the contrary, I am keeping an open mind.


Well, there is no shortage of other players on the field now. If Gemini lives up to the hype, that puts Google back into things. Anthropic and Inflection both got big infusions of cash and compute this year (and there were some pretty interesting interviews with their leaders recently), Meta is plugging along, etc.


I suspect that superintelligent AGI has already been achieved. That would explain the bizarre anti-human turn in the past five years or so.


I gotta see it to believe it.


Alberto: very good thought piece.

It's 100% true that AGI is a spectrum, and it won't be a single moment of attaining parity with everything humans can do with our minds...

But at the same time, all we need is an AI that's better than us at one thing: creating better AIs. Once that threshold is attained, we're gonna see a very, very fast liftoff, I think.
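To see why that one threshold matters so much, here's a toy back-of-the-envelope model (the speedup factor and timescale are illustrative assumptions, not forecasts): if each AI generation designs its successor some fixed factor faster, the total time to arbitrarily capable systems is a convergent geometric series.

```python
# Toy model of recursive self-improvement: generation k takes
# tau / r**k years to design generation k+1, for speedup factor r > 1.
# Total time converges: sum_k tau / r**k = tau * r / (r - 1).
# With tau = 2 years and r = 1.5, the limit is 6 years.
# All numbers are illustrative assumptions, not predictions.

tau, r = 2.0, 1.5
elapsed = 0.0
for k in range(15):
    elapsed += tau / r ** k
    print(f"generation {k + 1:2d} arrives at year {elapsed:5.2f}")
print(f"limit: {tau * r / (r - 1):.1f} years")
```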

So yes, it's a very wide range, but it may be traversed far faster than any of us can imagine.


>> Some of you will be on his side. Some of you will be on the other.

I am on both sides. We have more than fifty years of history with this technology, and the pattern is clear. At some point, call it year 1, there is a striking innovation. Everybody gets super excited. The age of machine intelligence is here! OMG! What does it mean? What should we do? Then the technology spreads and people start finding limitations. Huh. The technology is perhaps not perfect, not perfect at all. Enthusiasm subsides. People stop talking in terms of the entire culture being revolutionized and start to focus on specific applications. A few years go by. Then somebody makes another discovery! OMG! What does it mean? What should we do?

Well, *I* know. I didn't know fifty years ago but now I do.


I think it's dangerous to treat AGI as an intellectual curiosity. Control over the means of producing AGI is far more important than whether or when AGI as a thing is ever achieved. AGI has always been like "salvation" or "damnation": it is a state of mind of those who believe in it. Consensus that it has arrived will probably occur long after we can do anything about it. What happens after we all agree it has been achieved and all avenues of human creativity and innovation are no longer potentially unlimited? Maybe if we can keep the genie in the bottle and let it only talk to itself (i.e., an internal AGI), we have a chance.
