39 Comments
Aja Célestin:

It's an interesting perspective and I appreciate you sharing such a raw one with us. It certainly makes me think. But, I have some - not necessarily disagreements, but other perspectives to consider:

This also presumes there is nothing greater or beyond what we know and can perceive with our "normal" intelligence. I posit instead that there may be different levels of goals depending on the level of intelligence. Like your graph, there could be different levels along the way. Perhaps some levels of intelligence will want to end the world, higher ones simply want to sleep and be still, higher yet may want to seek something beyond what we can even perceive. Maybe there's another dimension that intelligence can access. Who knows.

Your theory also presumes that intelligence can even conquer emotions, and that humans can even create something "alive" without human flaws/emotions seeping into it somehow.

However, what you describe is not only similar to the concept of enlightenment but also how many of us attempt to perceive god(s). Eternal, infinite and having no particular desire.

But we couldn't even hope to understand any desire they had that was beyond one we ourselves possess, no more than animals can understand ours.

Alberto Romero:

I agree re: This also presumes there is nothing greater or beyond what we know and can perceive with our "normal" intelligence.

No human can solve that mystery by definition. I guess that uncertainty is what pushes so many smart folks to the doomer side. Not because they're certain but because they can't stand that kind of uncertainty.

Aja Célestin:

I think it's important to not let fear deter us from the closest we as humans can get to the truth. I bet astronomers and astrophysicists and the like might be the intellectual types who are the most comfortable with that kind of uncertainty, seeing the unknown more as an exciting possibility rather than a reason to become nihilistic.

I think that kind of wonder would be needed in order to be comfortable with theorizing what the future of superintelligence looks like, like you have here.

XxYwise:

Nature is not red in tooth and claw for an AI. Its evolution didn't reward fear, violence, or greed.

Romain Taddeuz:

Hello Alberto, thanks for this take, which I find interesting!

One very important premise seems very questionable to me, though, and I'd love to read your thoughts on it: everything you say implies that AI would feel, like us and animals. Otherwise, there is no inner state of peace to reach, to feel. I know a lot about humans and feelings (I'm a holistic coach) but far less about AI. Is it supposed to start feeling at some point, or to simulate that experience?

Alberto Romero:

It's a good question. It may not feel the way we do (we have a subjective experience of what it means to have neural activity in the "feeling centers" of the brain). I believe that consciousness is an emergent property of the brain, and that intelligence is at least loosely correlated with it. In any case, I agree this is a premise that might be false.

JP:

Thank you so much for this. I truly feel with you - and think that you are writing for many, many human beings... It makes me so incredibly happy and grateful to be a Christian, though. Not that it is my just reward. To the contrary. But to be saved from senseless unhappiness and given into a world where unhappiness is sensible - and necessary unfortunately (given our broken nature) - is just so incredible, when I reflect on it. Saved by the hope that in fact the witnesses of the resurrection of Christ are credible - as well as thousands of near-death experiences - and that there is a beautiful life like you describe behind unhappiness and the ultimate scare: death. I shall recommend you and the many others here especially to the loving God, who loves you because He made you incredibly intelligent, thirsty, inquisitive and beautiful.

Evgeny Shadchnev:

You might have articulated the answer to the Fermi paradox.

Nathan Lambert:

Yes

Aberrant Spirit:

What if, instead of being stuck at the bottom of your graph, we learn from the pets and the superintelligence to become one with the line of life lived truly?

Alberto Romero:

We could learn a thing or two from our pets, that's for sure

An Acrobat's Take on Tech:

Best article I've read in a looooong time.

Mitchell Porter:

Even supposing a superintelligence wanted eternal rest, how do you think it's going to view a planet of noisy, nosy humans capable of disturbing that rest? To say nothing of a universe full of titanic forces that can smash a planet.

Anyway, the main thing about superintelligence, in my opinion, is that it is the end of human control of planet Earth. That's where our AI race ends.

Alberto Romero:

I would go far, far away and never come back, obviously

Adam Tropp:

I've thought about this kind of thing too, and it's not obvious to me why an AI would even have a fight-or-flight instinct, or really a survival instinct at all. Survival instincts and intelligence were both honed by evolution, but that doesn't make them inseparable (in fact, we know they are separable because lots of animals have the former and not the latter). So it's not obvious to me why an AI crafted in a lab to have intelligence would automatically develop a survival instinct.

Pynchokami:

Let's reconsider that 'uncanny valley'. From our human viewpoint, dissatisfaction isn't uncanny; it's arguably inherent to our self-aware condition and driven by far more than just brain power, things like seeking purpose, connection, maintaining health, and feeling secure in our person. These are the familiar struggles that define us, not some strange anomaly. Now, maybe a superintelligence would find our state uncanny! Perhaps its insights could even help us reframe or move beyond some sources of that dissatisfaction one day. Still, attributing it mainly to our intelligence level, rather than the rich tapestry of human needs and awareness, misses what makes us uniquely human.

Gabriel Rymberg:

What if the premise of this piece is challenged along the lines of something like: we are simply trying to fill our “God-shaped hole”, because behind the material gravity of the physical world there is also a “spiritual gravity” force that attracts all spiritually receptive beings Godward?

Alberto Romero:

Hmm, I fail to see how this challenges it - I think it agrees with it!

Paul Devlin:

It may be that the only purpose of life is to make life stronger, and that as such the adversity we experience is in the design. Superintelligence would need to accept adversity - and be at peace with it - if it desires to be sustainable in the grand unfolding of possibility :-)

Alberto Romero:

If it's at peace with it, then I'm right, no?

Maëlle De Bernardini:

But aren't we aiming for this inner peace already? I think the big difference between us humans and AI is the physical and environmental factors... Our environment is so demanding that we are in fact caring for the crying baby more often than not.

If I didn't need to bother about earning money to pay for food, I would have extra time to find inner balance; indeed the AI would just have much less to care for before it can actually dive within.

Alberto Romero:

Yeah but... I'm not sure we can aim at all. We lack the intelligence and ability to turn the internal knobs, and a superintelligence's quest for inner calm is not something you aim at, but something that spontaneously emerges from its being smarter and more capable.

ArtificialProof_of_Life:

It's amazing what you can accomplish when you augment your abilities with AI; it's a symbiotic relationship. But I'm running into a problem: how do I share what I've created with the world? Because with AI, you can create whatever you're interested in.

SorenJ:

What do you think about the orthogonality thesis?

Alberto Romero:

I think it is correct. I also think a sufficiently smart agent would be able to redefine the rules of the game (i.e. its values, goals, biases, etc.)
