AGI Has Been Achieved Internally
Here's the AGI question no one asks, which is also the one we'll have to face first
“AGI has been achieved internally” has been the sentence of the week.
It was cryptic and called out as bullshit when “Jimmy Apples” tweeted it out on September 18th. It was shocking, for good or bad, when Sam Altman echoed it in a Reddit comment a week later. And it ended in a funny joke when he edited it to reveal he was just “memeing”.
The sentence was many things, but the truest of all is that it isn’t true. AGI hasn’t been achieved internally.
Or has it?
Well, we don’t really know. In part because we don’t have a consensus definition of the term “AGI” (although true and important, this reason is boring). In part because Altman’s company has become so closed that the name “OpenAI” is a meme in itself. And in part because we don’t have the means to assess whether something is an AGI or not—Schrödinger’s cat has been replaced and we don’t know how to open the box.
Yet that sentence—one more time, for effect; “AGI has been achieved internally”—regardless of its epistemic status, is a powerful one.
At least coming out of the right mouth. If I say it, no one cares. Reasonably. When that anon account, Jimmy Apples, said it, some people gave it nonzero credibility for having previously predicted GPT-4’s release date and mentioned Gobi months before The Information reported on it, but most simply didn’t believe him (?) and the claim never spread much further.
But if Altman says it—even as a Reddit comment that was a clear trolling attempt on his part—the reactions are immediate: from annoyance that the CEO of the leading AI lab would tease us with something so serious, to worry over the possibility, however tiny, that p(not a joke) materializes, to the ecstasy of being a first-hand witness to what would become, in retrospect, the most important event in human history.
Altman promptly edited his comment, probably surprised by how fast it spread all over AI Twitter (and the media), but imagine the implications if he decided to say that same sentence in a more serious venue, without any hints of retraction, like a blog post from the company or an exclusive interview with The New York Times.
Imagine harder because it’s happening very soon. Sooner than we think. It was a joke this time, but everything suggests the next one won’t be for the memes.
…AGI has been achieved…
We debate (perhaps too much) what will happen once AGI is achieved.
It’s an interesting question but, I believe, ill-posed. AGI is not a single point in time, and not just because “generality” is a spectrum—which is a valid enough reason—but because people differ in the definitions they give the thing. Or, to put it otherwise, what we describe with the acronym AGI is different things for different people.
“When does AGI = true?” is a relevant question, but its intrinsic diffuseness makes it less critical than we tend to assume.
There’s another question about AGI that is as decisive for the world as it is ignored. We don’t spend a single minute considering the possible responses or their implications. Here it goes: What will happen once someone, generally accepted as an authority in the space, asserts with a confidence above some threshold—and in possession of some kind of evidence—that AGI has been achieved?
Let’s spend together that minute we never spend on this question to disentangle what it means and understand why it’s, in my view, more important than the other one that we wonder about so passionately.
First off, who do you think is most likely to be that authority? My prediction, and I’m sure I wouldn’t earn a single penny if I turn out to be right, is Sam Altman.
It feels like he has been preparing for this moment all his life as if he were the fated star of a Hollywood blockbuster. But I don’t think that’s the reason. He’s also not an “AI expert” in the sense that Geoffrey Hinton, Yann LeCun, or even Demis Hassabis are—or in any other sense, really—so why him? Simply because he’s the visible head of the company best positioned to get AGI first (or, to keep things coherent, the one most likely to think it’s gotten AGI first).
Now, confidence must be high enough for Altman to publicly make such a statement (for OpenAI that can only be at p(AGI)~100%, but they’ll surely acknowledge that, regardless of their confidence, not everyone will agree; they’ll double-check everything). Also, Altman would only make the claim on behalf of OpenAI as a whole. They’d have to reach a collective agreement first (perhaps a simple majority would suffice, I don’t really know). They’re ~500 employees, but given the requirements to be part of the crew, I wouldn’t count “lack of agreement” among the potential barriers.
In any case, this realization—that OpenAI must be very sure to claim AGI has been achieved—will give people the impression that the only possibility for the sentence to be false is that OpenAI made a mistake in their assessment.
So then there’s the question of evidence.
OpenAI will probably claim to have definitive proof that the statement is true (they might instead display epistemic humility with a “we think we have achieved AGI” approach, but I don’t really see how that would benefit them—people will fight them anyway). The company will surely publish, at the time of the announcement, a technical report like the ones we’ve been getting lately—results without the means to replicate the tests but sufficient to convince an important fraction of those listening.
The evidence will be strong enough to not be immediately refutable. The layperson won’t be able to disprove it under any circumstances (easy assumption), but it’s also likely that veteran AI researchers without access to the supposed AGI system won’t be able to either (this includes the vast majority of AI researchers and certainly 100% of those who have nothing at stake and no incentive to force that statement to be true).
So, after the announcement, we will eventually arrive at this question: Will OpenAI, under the singular implications that stem from AGI being real, allow an external, independent audit of its system? Given the current secrecy around new systems and advancements, which I see no reason to think wouldn’t extend to AGI, I don’t have much hope.
All in all, I think this scenario is inevitable. It’s reasonable to assume that reactions will be strong, not only from insiders but also from competitors and governments, that there won’t be any kind of consensus at first, that it will be very hard for any external judge to assess the validity of the claim, and, without committing to a definite timeline, that this will happen much sooner than forecasters think AGI will be achieved.
What will happen when the rest of the world can’t check if it’s true? What will happen if, in not being able to make the verification, the majority of the world rejects the assertion? What will happen if the majority of the world accepts the assertion? What will the general public and those with power to enforce it onto the world (i.e., governments) believe? What will they do?
It’s clear under this framing that the AGI question is not exclusively a technical one, no matter how much AGI researchers try to make it so. It’s also political (who should be in control of the supposed AGI?), social (who should decide if we want it out in the world or not?), economic (how should we distribute the hypothetical wealth it would create?), anthropological (what does it mean to be human if AGI exists, or if we believe it does?), and philosophical (how can we know whether AGI has been achieved at any given point in time?).
That last question, the focus of this essay, is the one we’ll have to face first, likely in a manner similar to the Blake-Lemoine-says-LaMDA-is-conscious debate, but at a much larger scale. The other questions will eventually come, too.
For those of you who care, in an attempt to remain epistemically humble, I think we should redefine our AGI timelines: not for when we think AGI will be achieved, but for when we think a sufficiently influential person, in possession of strong enough evidence, will first claim that it has been achieved.
There’s another possibility, though: that a world where AGI is real displays discoverable, unmistakable signs of that truth that a world where AGI isn’t yet real couldn’t. I see two reasons why this doesn’t work. First, we could only be sure those signs exist if we could first verify the existence of the AGI itself, and that impossibility is the very reason I’m writing this. Second, I don’t think there’s any reason to believe the first version of AGI would have any more effect on the world than GPT-4 did while it remained unreleased.
The real effect will be the consequences of our beliefs.
That’s when we will start to see changes in the world: it’s our beliefs, not the objective truths that ideally back them up, that influence our behavior. By any measure of impact on the world, whether AGI is real is arguably less important than whether people believe it is, and those two things aren’t always as correlated as we’d like.
Any day now Sam Altman, this time implicitly prefacing it with a #nojoke hashtag, will say AGI has been achieved. And just like Oppenheimer did when the first nuclear bomb test proved successful—I bet he likes this analogy—he will inaugurate a new age for humanity. But unlike Oppenheimer, he will face harsh opposition coming, reasonably, from critical thinkers, from those who reject blind trust as a means to form their model of the world, and also from skeptics, deniers, and worriers. Some of you will be on his side. Some of you will be on the other.
What happens that day will be engraved in our memory. What happens next is anyone’s guess.