Isn’t it peculiar how suddenly superintelligence has become synonymous with existential risk?

All I see is men fencing with hypotheticals and vague thought experiments, letting their imaginations take them for a ride, warning the world about some future generation of this technology.

I’m sorry but I don’t buy it.

Andrew Ng said it best in his post on The Batch: “When I try to evaluate how realistic these arguments are, I find them frustratingly vague and nonspecific. They boil down to “it could happen.” Trying to prove it couldn’t is akin to proving a negative. I can’t prove that AI won’t drive humans to extinction any more than I can prove that radio waves emitted from Earth won’t lead space aliens to find us and wipe us out.”

Nov 3, 2023 · Liked by Alberto Romero

Fear is a very reasonable response when you don't want to die. We evolved it for a reason.

Nov 3, 2023 · Liked by Alberto Romero

Thanks for this take, Alberto. I've been observing the gradual split into several camps on this issue and yours is definitely not the only voice that points out that existing big players stand to benefit from regulation in its emerging form. I sincerely hope we find a way to balance reasonable regulations with the need for competition and open-source efforts in this space.

As usual, I will bellow that none of this matters. :-)

If AI is made perfectly safe, it will further fuel an accelerating knowledge explosion, which will generate ever more, ever larger powers that will also have to be managed somehow. As the emerging powers grow in number and scale, it becomes ever less likely that we will be able to manage them all successfully.

What we might learn from the AI Safety Summit is that the leaders of governments and the AI industry seem not to grasp the threat posed by the underlying knowledge explosion any better than we do. The "experts" appear to still be focused on particular threats arising from particular technologies, a focus seemingly based on an assumption that they will be able to manage all coming technologies as they emerge, one by one by one, no matter how large such technologies may be, or how fast they arrive.

The knowledge explosion itself, the engine generating all the emerging powers and whatever threats they present, appears to remain a matter of holy dogma that can't be challenged, much as it would never have occurred to people in medieval times to question the divinity of Jesus.

What I can learn from commenting on such matters is that I can keep typing such sermons endlessly until my very last breath, and none of that will matter either. :-)
