Richard Sutton’s Bitter Lesson is, simplifying, that in the long term it’s always better to let computation carry the burden of building better AI systems (through learning and search algorithms) than to try to build them ourselves by leveraging our knowledge.
In Sutton’s view, humans have contributed little to the best AI systems we’ve built: rather than the intelligent architects of the future we thought we’d be, we’ve become background characters in this show of artificial phenomena, which succeeds not because of us (sometimes even despite us and our sense of self-importance). Sutton asserted this four years ago (March 2019). He wasn’t short of evidence or good arguments to defend his point, which, if true, is certainly bitter.
He didn’t claim humans are entirely irrelevant—of course, there’s outstanding ingenuity and deep expertise behind building Deep Blue and AlphaGo (or ChatGPT and GPT-4, for that matter). He meant that once more computation becomes available, one unit of that mystical power does more to improve an AI system’s performance on a given task than one unit of human knowledge (whatever that means).
And although back then we might not have agreed that every problem was subject to this reality (e.g., not every solution is a matter of learning), by extending his arguments to infinity we might find that nothing escapes it.
In this essay, I’ll accept Sutton’s conclusion as my premise. Humans aren’t disposable, but yes, computation is king in AI. If so, what happens next?
It keeps getting bitterer
GPT-4, which was released last week, fits Sutton’s arguments perfectly. I’m sure there’s a ton of engineering prowess involved in the creation of GPT-4, but the truth is that OpenAI built it with around 375 employees. Google has more than 150,000, and so far it’s been unable to compete toe to toe. Are OpenAI’s employees four hundred times smarter than Google’s?
We could enter a debate about OpenAI’s enviable knack for capturing talent, or about Google’s inability to ship products, but there’s a simpler explanation that follows from Sutton’s lesson: With enough computing power, human knowledge isn’t as essential—the number of minds thinking about how to build better AI models is a much smaller factor in the final outcome.
GPT-4, conceived by such a small number of people, is undoubtedly better than Chinchilla or PaLM (the only language models competent enough to even shyly contend for the crown). And we don’t know why. Courtesy of “CloseAI,” GPT-4’s parameter count, the data it was trained on, and even the amount of computing power used to train it are all unknown. We only know there’s more. More and more layers of GPUs going brr.
And it doesn’t even matter. The exact size of their supercomputer is irrelevant. It’s big enough to outcompete the whole crew of highly intelligent, highly capable engineers at Google AI. It works, and OpenAI has won. The takeaway is that the thousands and thousands of NLP researchers working at Google and other tech companies are suddenly looking into the abyss.
I don’t think they’ll lose their jobs. That’s not their tragedy. It is that their value, measured by their ability and knowledge to solve some of the deepest mysteries of intelligence and language, has dropped to ~0 in an instant, outdone by 375 employees and a huge chunk of computing power transformed into the black box that is GPT-4. Truly bitter.
And it will get bitterer
So let’s explore the ultimate consequences of what all this means. You shouldn’t take the rest of this article as an argument explaining something observable (Sutton based his points on available evidence) so much as a speculative attempt to predict something possible (if not probable). Bear with me.
Plato made us leave the cave some two thousand years ago. And we’ve spent the precious centuries since digging a new one. First, we thought we knew a lot. Then, we began to acknowledge our ignorance (some more than others), but we never denied our superiority over all other forms of life. We were the lords of this world.
Even after the reality shock of the bitter lesson, that is, after accepting that our role in creating the next generation of magnificent technology (which we hoped would be the fruit of years of stacking up knowledge about the universe’s hidden dynamics) won’t ever be as primary as it has been, we’ve managed to keep our dignity. We may no longer be the stars of the movie, but we’re still the only ones here able to fill it with meaning. Only our unique awareness of what we’re helping create justifies creating it at all. Computers may be the better horses now, but we’re still the ones riding them.
It may not be this way forever, though, and don’t mistake what I mean here: I’m not talking about AI becoming smarter than us (AGI, ASI, whatever). I’m not sure that’s possible. It’s not important. Because sooner than that (we won’t know how much sooner), these things we’re building will grow so complex that not even our privileged minds will be able to make sense of them. It’s already happening.
Here’s the thing: no one—not even the creators—knows what GPT-4 is all about. All those memes and philosophical puzzles about Shoggoths, Waluigis, and masked simulators are desperate—and vain—attempts to imbue coherence into something that is slowly escaping the grip of our understanding. Soon, there will only be mysteries. And we won’t stop. Because computers, our metaphorical horses (which by then will be doing an even greater share of the work of carrying us forward), will keep running toward the unknown long after we can no longer recognize our fate.
We were the masters. The rulers. We’re now (still) the ideators, albeit no longer the main constructors. And soon, we’ll be just spectators, mere observers of a world neither built by us nor understood by us. A world that unfolds before our eyes—too fast to keep up, and too complex to make sense of. The irrelevancy that we so deeply fear—not just as individuals, but as The Chosen Species—is lurking in the impending future that we’re so willingly approaching.
It was bitter to accept that, after all, we might not be the key piece of the puzzle we were put in. It’ll be bitterer to finally realize that we’re not even worthy enough to partake as sense-makers in the unimaginable wonders that await humanity on the other side of this journey.
Thanks Alberto for picking up on this. The story actually has an interesting additional angle to it: the human condition in pre-modern times was subject to the will of the gods, which could only be inferred through oracles and signs, never directly known. The Enlightenment taught us to see for ourselves, and since then we have taken it for granted that we could know the world and act according to knowledge. We now construct entities that will become more intelligent than us, that we cannot control (cf. Alfonseca et al. 2021, https://dx.doi.org/10.1613/jair.1.12202), and – as you point out – that we cannot even properly know. It is striking to realize that in this sense we will return to the pre-modern state. Modernity was but a phase.
I liked this:
"And soon, we’ll be just spectators, mere observers of a world neither built by us nor understood by us. A world that unfolds before our eyes—too fast to keep up, and too complex to make sense. The irrelevancy that we so deeply fear—not just as individuals, but as The Chosen Species—is lurking in the impeding future that we’re so willingly approaching."
Artful, and insightful.
One irony I see in this future you are considering is that on one hand we are deeply confident as we fuel this future, and on the other hand we seem deeply defeatist. As you've written, some in the AI industry have expressed such concerns, but as I understand what you've taught us, they also seem to feel we have no choice but to go forward. And so they keep pushing forward toward what concerns them with great confidence and ability.
I would be interested in being educated about those in and around the industry who are arguing we should just stop. Who are they, what are they saying, how influential are they etc.
I'm a boomer geezer, and much of my perspective arises out of our experience with nuclear weapons. My generation didn't invent nukes, but we funded their mass production and improvements etc. And now we have no idea what to do next. So, as we boomers depart the scene, we're dumping our FUBAR in the laps of our children and grandchildren.
I see current generations basically repeating this mistake with AI and genetic engineering. You'll build it, and then become prisoners of it, and then pass the prison on to your kids.