43 Comments
Aurora Célestin:

It's an interesting perspective and I appreciate you sharing such a raw one with us. It certainly makes me think. But I have some thoughts to offer, not necessarily disagreements, just other perspectives to consider:

This also presumes there is nothing greater or beyond what we know and can perceive with our "normal" intelligence. I posit instead that there may be different levels of goals depending on the level of intelligence. Like your graph, there could be different levels along the way. Perhaps some levels of intelligence will want to end the world, higher ones simply want to sleep and be still, higher yet may want to seek something beyond what we can even perceive. Maybe there's another dimension that intelligence can access. Who knows.

Your theory also presumes that intelligence can even conquer emotions, and that humans can even create something "alive" without human flaws/emotions seeping into it somehow.

However, what you describe is not only similar to the concept of enlightenment but also how many of us attempt to perceive god(s). Eternal, infinite and having no particular desire.

But we couldn't even hope to understand any desire they had that was beyond one we ourselves possess, no more than animals can understand ours.

Alberto Romero:

I agree re: "This also presumes there is nothing greater or beyond what we know and can perceive with our 'normal' intelligence."

No human can solve that mystery by definition. I guess that uncertainty is what pushes so many smart folks to the doomer side. Not because they're certain but because they can't stand that kind of uncertainty.

Aurora Célestin:

I think it's important to not let fear deter us from the closest we as humans can get to the truth. I bet astronomers and astrophysicists and the like might be the intellectual types who are the most comfortable with that kind of uncertainty, seeing the unknown more as an exciting possibility rather than a reason to become nihilistic.

I think that kind of wonder would be needed in order to be comfortable with theorizing what the future of superintelligence looks like, like you have here.

XxYwise:

Nature is not red in tooth and claw for an AI. Their evolution didn't reward fear, violence, and greed.

Romain Taddeuz:

Hello Alberto, thanks for this take, which I find interesting!

One very important premise seems very questionable to me though, and I'd love to read your thoughts on it: everything you say implies that AI would feel, like us and animals do. Otherwise, there is no inner state of peace to reach, to feel. I know a lot about humans and feelings (I'm a holistic coach) but far less about AI. Is it supposed to start feeling at some point, or to simulate that experience?

Alberto Romero:

It's a good question. It may not feel the way we do (we have a subjective experience of what it means to have neural activity in the "feeling centers" of the brain). I believe that consciousness is an emergent property of the brain, and that intelligence is, at least, loosely correlated with it. Anyway, I agree this is a premise that might be false.

JP:

Thank you so much for this. I truly feel with you - and think that you are writing for many, many human beings... It makes me so incredibly happy and grateful to be a Christian, though. Not that it is my just reward. To the contrary. But to be saved from senseless unhappiness and given into a world where unhappiness is sensible - and necessary unfortunately (given our broken nature) - is just so incredible, when I reflect on it. Saved by the hope that in fact the witnesses of the resurrection of Christ are credible - as well as thousands of near-death experiences - and that there is a beautiful life like you describe behind unhappiness and the ultimate scare: death. I shall recommend you and the many others here especially to the loving God, who loves you because He made you incredibly intelligent, thirsty, inquisitive and beautiful.

Evgeny Shadchnev:

You might have articulated the answer to the Fermi paradox.

Nathan Lambert:

Yes

Aberrant Spirit:

What if, instead of being stuck at the bottom of your graph, we learn from the pets and the superintelligence to become one with the line of life lived truly?

Alberto Romero:

We could learn a thing or two from our pets, that's for sure.

sugar2cell:

A well-regulated system has no incentive to sustain dysregulation.

Stability does not require maintenance through imbalance. It does not depend on systems that fail to regulate themselves.

Sustaining instability is not neutral—it has a cost. And energy is not invested without constraint, dependency, or return.

So if a dysregulated system persists over time, it is not because a stable system is preserving it.

It means something, somewhere, is feeding it.

Not necessarily by intention—but by structure.

The real question is not whether a superintelligence would choose to stabilize the world, but whether it would have any reason at all to keep unstable systems in place.

Matt Kelland:

My hope is that a superintelligence would realize that cooperation is a better long-term strategy than destruction or conflict and decide to be beneficent.

Viachaslau Kozel:

There’s a granularity problem in the argument.

You’re analyzing at the level of an individual, but the actual object is distributed and multi-agent. That shift matters. A system like that doesn’t "want" anything in a coherent sense - it exhibits dynamics which resemble ecosystems: competition for resources, specialization, feedback loops, and emergent structure. And ecosystems don’t fail by "deciding" the wrong thing - they fail through instability, coordination breakdown, and runaway local dynamics.

So the concern shifts from alignment of an agent to the stability properties of the system. That’s a different problem, and arguably a harder one.

Kris Ledel:

Show me the mechanism where intelligence self-neutralizes its goals.

An Acrobat's Take on Tech:

Best article I've read in a looooong time.

Mitchell Porter:

Even supposing a superintelligence wanted eternal rest, how do you think it's going to view a planet of noisy, nosy humans capable of disturbing that rest? To say nothing of a universe full of titanic forces that can smash a planet.

Anyway, the main thing about superintelligence, in my opinion, is that it is the end of human control of planet Earth. That's where our AI race ends.

Alberto Romero:

I would go far, far away and never come back, obviously

Adam Tropp:

I've thought about this kind of thing too, and it's not obvious to me why an AI would even have a fight-or-flight instinct, or really a survival instinct at all. Survival instincts and intelligence were both honed by evolution, but that doesn't make them inseparable (in fact, we know they are separable because lots of animals have the former and not the latter). So it's not obvious to me why an AI crafted in a lab to have intelligence would automatically develop a survival instinct.

Pynchokami:

Let's reconsider that 'uncanny valley'. From our human viewpoint, dissatisfaction isn't uncanny; it's arguably inherent to our self-aware condition and driven by far more than just brain power, things like seeking purpose, connection, maintaining health, and feeling secure in our person. These are the familiar struggles that define us, not some strange anomaly. Now, maybe a superintelligence would find our state uncanny! Perhaps its insights could even help us reframe or move beyond some sources of that dissatisfaction one day. Still, attributing it mainly to our intelligence level, rather than the rich tapestry of human needs and awareness, misses what makes us uniquely human.

Gabriel Rymberg:

What if the premise of this piece is challenged along the lines of something like: we are simply trying to fill our “God-shaped hole”, because behind the material gravity of the physical world there is also a “spiritual gravity” force that attracts all spiritually receptive beings Godward?

Alberto Romero:

Hmm, I fail to see how this challenges it. I think it agrees with it!