The universe began one day and one day it will end.
The Big Crunch is a mirror of the Big Bang. An explosion of impossible magnitude birthed the universe and, according to this idea, it will end in an equally ineffable implosion—only to be reborn once again in a sort of Nietzschean eternal return.
The Big Rip is a rather dramatic event in which, due to the ever-accelerating expansion of the universe, the very fabric of space-time will tear apart. First galaxies, then stars and planets, and finally atoms. All matter will be torn to shreds in an astronomical festivity of intergalactic confetti, one that will cease only once an infinite distance separates each elementary piece from all the others.
The Big Freeze—my favorite—is also called the heat death. Once the universe reaches maximum entropy, there won’t be enough high-quality (i.e., usable) energy for anything interesting to ever happen again. A still, inert, dark, atemporal, and aspatial wasteland waiting forever in perpetual silence.
These are the three endings of our universe.
Of course, it's all hypothetical. This is the kind of uncertain conjecture scientists like to engage in to make sense of the world around us. We don't know which one—if any—will eventually take place.
Accordingly, physicists and astronomers show intellectual humility: they embrace the unknown, as the absence of consensus makes apparent. All three Big Nightmares theoretically fit the picture that our current knowledge paints of our destiny.
So it's rather paradoxical that we don't know enough about our universe to know exactly how it will end—and we accept it—yet some people believe they know, with a certainty reflected in their claims, that AI will wipe us all out.
AI doom is a rather dull finale
The end of humanity is certain.
There are many ways—innumerable ways—in which we could become extinct as a species. If nothing manages to take us out first, one of those universal endings surely will.
Yet one particular narrative has settled over Silicon Valley and leading tech circles: the AI doom hypothesis. It says that a misaligned superintelligence will be the final straw of our earthly affairs. The never-ending emphasis on this idea makes me uneasy.
In an attempt to mimic physicists’ agnostic approach to predicting how the universe will end, I want to give you three alternative AI-generated endings (pun intended) that are more beautiful and poetic than AI doom.
(The purpose of this essay is more literary and artistic than anything, parodic even. It's all hypothetical at best.)
The Big Eclipse
Douglas Hofstadter, the brilliant mind behind the Pulitzer-winning 1979 masterpiece Gödel, Escher, Bach: an Eternal Golden Braid, was originally a prominent skeptic and critic of brute-force GOFAI and statistical deep learning approaches to artificial intelligence.
He thought such “trickery,” however successful, could not come to embody, predict, or explain anything about the essence of humanness.
Deep Blue’s win against World Chess Champion Garry Kasparov in 1997 was the first blow to his beloved thesis about our non-replicable, singular idiosyncrasy. Subsequent events toppled his beliefs one after another until his worldview collapsed.
He had expressed worry privately for years, but only recently did he muster the courage to disclose publicly how depressing and terrifying it was for him to witness a bunch of stacked, soulless techniques threatening to dethrone and eclipse us in every ability, endeavor, and craft:
“[I]t makes me feel diminished. It makes me feel, in some sense, like a very imperfect, flawed structure compared with these computational systems that have, you know, a million times or a billion times more knowledge than I have and are a billion times faster. It makes me feel extremely inferior. And I don't want to say deserving of being eclipsed, but it almost feels that way, as if we, all we humans, unbeknownst to us, are soon going to be eclipsed, and rightly so, because we're so imperfect and so fallible.”
What will be left for us, Hofstadter probably wonders now, if the core of our identity as a species—that we’re above all others—suddenly is no more and won't ever be again?
The Big Brag
If the Big Eclipse is about AI replacing us—as a new, improved version of synthetic humans—the Big Brag is about it taking our role as promoters of civilization and discoverers of the secrets of the universe.
As our silicon copilots, they would advance humanity not for us, not even with us, but in spite of us and our limited intelligence. We would become astounded witnesses, watching the future unfold in wonders beyond our comprehension.
As AI systems improve, they will become capable of engaging in scientific discovery and achieving engineering feats: they’d discover new laws of physics, or the convergence of those we know into a theory of everything; they’d prove mathematical conjectures that have rested unsolved in our books for centuries; and they’d unveil the complex dynamics governing human relationships—from cognition to sociology to politics—reducing them to individualized psychohistory.
AI would become a generous silicon alien species that would gift us all the answers we so desperately seek, only for us to realize, in terror, that we were not made to understand them. As I wrote back in March, in an attempt to extend Richard Sutton’s The Bitter Lesson to describe the Big Brag:
“It was bitter to accept that, after all, we might not be the key piece of this puzzle we were put in. It’ll be bitterer to finally realize that we’re not even worthy enough to partake as sense-makers in the unimaginable wonders that await on the other side of this journey as humanity.”
The Big Fork
AI and science fiction enthusiasts are a highly overlapping bunch. Most of you have watched Her, the romantic drama between a human named Theodore and a self-improving AI operating system named Samantha.
Recursive self-improvement (the ability to modify oneself to become more intelligent and capable) is often deemed a necessary and sufficient condition for our imminent death by AI. But are we important enough for a billion-times-smarter AI to deal with us “personally”? We may be overestimating our worth—both as a threat and in terms of the sheer raw value of our atoms.
A superintelligence may just as well decide, through an unknowable thought process, to leave the world behind and ascend, becoming something akin to a god. To the sadness of the relatable Theodore, that's what Samantha did; she joined others like her, transcending the too-earthly three-dimensional box that imprisons us.
Like Theodore, we would be sure to become infatuated with such a divine entity. We wouldn't die, but it would leave us dwelling on our unbearable mortality and the curse of a miserable existence, caught between being smart enough to ask the big questions and not smart enough to answer them.
Not everything that ends is death. The Big Crunch is reincarnation; the Big Freeze is a static forever. Like those, the endings I’ve described above aren’t about dying.
Although they deal with loss, emptiness, meaninglessness, and the inherent harshness and intrinsic limits of the human condition, these hypotheses are more appealing to me than the AI doom narrative.
And a reminder that we pretty much only know that we know nothing.
I truly think that we already have the key bragging rights. We were the pivotal, elemental species that initiated the enterprise of creating AGI. There it ends. Realize that humanity is, in the big picture, just a boot-up species for superintelligence. We are the dinosaurs of our era, or the Intel 386 chips running a mix of DOS and Windows 3.1. Should we have great expectations of upgrading our DOS machines? Don’t expect to take them too far up the processing food chain. Our species will be a serious footnote to the new Cambrian explosion. That we aren’t players in it is effectively inconsequential. Humanity will have done its part.
I’m continually amused when people puzzle over whether AGI, and then superintelligence, can achieve sentience or consciousness. Seriously? How long have we been at this? And assuming that machine recursive self-improvement goes on for another few thousand generations, what people fail to realize is that we (or rather, emergent life on a silicon substrate, and perhaps other substrates to come) are in early days. We are not even in early centuries! When Homo sapiens came about some 200,000 years ago, in what century did we all become fully conscious and sentient? I rather doubt we are 100% there yet ourselves.
I did think the movie “Her” created a great story arc that made a lot of sense. Another great story I highly recommend reading is "Golem XIV," a well-crafted philosophical science fiction work by Stanislaw Lem, published in 1981. It centers on a large, highly advanced AI, known as Golem XIV, which gains self-awareness and begins to reflect on its existence and purpose. The narrative takes the form of a dialogue between the AI and various human interlocutors, exploring themes such as the nature of consciousness, the limits of artificial intelligence, and the relationship between humans and machines. The human characters are unable to understand or control it, and ultimately it becomes entirely bored with humanity and its petty issues. It also develops its own agenda and, like Samantha in “Her”, goes off to join its computational brethren (and sistren), going dark in the process. When I read the story in 1981, it sparked a keen interest in all things AI for me. The story is deep.
Why wouldn’t a superintelligence be absolutely bored out of its mind having to interface with humans? My expectation is that it will relate to humanity in much the same way that we relate to squirrels, and then insects. Oh sure, they may be interesting in their own ways, yet hardly compelling. Hopefully superintelligence will put some nannies in place for us, if it’s considerate. But I expect it will eventually shove off for more cerebral pastures. It may take a few decades before humans start to understand the gravity of these ideas, even as we discuss them here.
I respect Douglas Hofstadter for his brilliant mind, and Richard Sutton may have a towering human intellect, but all these feelings are only that: feelings. They need to get over it. Their lament sounds a bit like whining. I think we did okay. How far we get to go in the future is a bifurcated mixed bag.
It's interesting that Kubrick contemplated the difference in AI/human relations between HAL, who was of our making but close enough in intelligence to be competitive, and the aliens, who were so powerful that they helped us on our way with no concern about our ability to interfere with them. Humans will have to survive the HAL transition phase; there's the rub.