God Is Dead So They Are Building a New One
Have you ever thought about what "superintelligence" truly means for the people who are building it?
My final advice here is: don’t think of us vs them; us, the humans, vs these future super robots. Think of yourself and of humanity in general as a small stepping stone — not the last one — on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things.
That was Jürgen Schmidhuber, one of the fathers of modern deep learning, in a TEDx talk in 2012. That’s right, 2012. The year the deep learning revolution began for most of us, Schmidhuber was already working on generative AI and predicting the “doomers vs boosters” battle.
As I assume is evident from the excerpt above, he’s a techno-optimist and an accelerationist (someone who thinks the AI species they’re so eagerly building is the next step in evolution).
Here’s Richard Sutton, a pioneer of reinforcement learning and author of “The Bitter Lesson” (an essay dearly loved among techno-optimists), three months ago in a talk about the future of AI and how it is, quite literally, also the future of humanity:
In the ascent of humanity, succession to AI is inevitable. This would be the next great step: technologically advanced humans and then, AIs would be our successors. Inevitably, eventually, they would become more important in almost all ways than ordinary humans … this need not be viewed as bad in any way.
Here’s Sam Altman in 2015, the same year he co-founded OpenAI, writing about the superintelligence they’re now trying to create:
There is an obvious upside case to SMI [superhuman machine intelligence] — it could solve a lot of the serious problems facing humanity — but in my opinion it is not the default case. The other big upside case is that machine intelligence could help us figure out how to upload ourselves, and we could live forever in computers. Or maybe in some way, we can make SMI be a descendent of humanity.
Schmidhuber, Sutton, Altman: all three sound scarily close to what many other AI pioneers and leaders think (not all of them, though). There’s no reason to consider humans and humanity the final frontier of life; or, as I interpret their wording, no reason to want us to be.
Whether we manage to make the transition safely is up for discussion, but pretty much everyone who has thought this through agrees: no physical or biological law says humans embody the final stage of intelligence, or that the universe’s complexity can’t increase past us.
Speaking as part of Team Human, that’s… illuminating
Just to be clear, I am Team Human. I can’t be content with “that little role in the grand scheme of things” that, according to Schmidhuber, the universe has prepared for us. Or with Sutton’s view that this is “a thoroughly good event.” Or with making “SMI be a descendent of humanity,” as Altman suggests (and presumably in line with what he’s trying to do at OpenAI).
They feel eerily optimistic about eventually being replaced. Their stance could easily be translated into: “Our time at the very top of the living hierarchy is coming to an end, and it’s for the better.” Somehow, from his words above, Schmidhuber recently concluded that “In the end, all will be good.”
Two of the most prominent AI scientists of our time, Yann LeCun and Geoffrey Hinton, don’t share that blinding glee, but they do share the belief that a superintelligence is coming. LeCun is a firm believer (to most, an overconfident one) that, paraphrasing, “we will control the superintelligence and it will serve us.” Hinton contrasts with him and with the others, saying that “we should be afraid of the possibility that a rogue superintelligence could take over our world.”
I’m not convinced: I’m not that optimistic, I’m not that confident, and I’m not that fearful. But, amid the dense, almost tangible uncertainty that floats in the air these days, I see with newfound clarity (as in, “I’ve internalized and comprehended it to the point of a soul-shocking realization”) something they all have in common that I hadn’t noticed before.
I see what Schmidhuber perceives as the incoming bearer of unfathomable complexity; what Sutton seeks as our ascended successor; what Altman thinks will host our fragile digitalized minds; what LeCun believes we can keep chained and under control; what Hinton is starting to be profoundly terrified of.