Isn’t it peculiar how suddenly superintelligence has become synonymous with existential risk?
All I see is men fencing with hypotheticals and vague thought experiments, letting their imagination take them for a ride, warning the world about some future generation of this technology.
I’m sorry, but I don’t buy it.
Andrew Ng said it best in his post on The Batch: “When I try to evaluate how realistic these arguments are, I find them frustratingly vague and nonspecific. They boil down to “it could happen.” Trying to prove it couldn’t is akin to proving a negative. I can’t prove that AI won’t drive humans to extinction any more than I can prove that radio waves emitted from Earth won’t lead space aliens to find us and wipe us out.”
Agreed, Jurgen!
Fear is a very reasonable response when you don't want to die. We evolved it for a reason.
It is not like we are facing a tiger hiding in the bushes.
Is it not an even worse foe we face? Human greed driving us to self-destruction. No tiger was ever as deadly as we are.
Thanks for this take, Alberto. I've been observing the gradual split into several camps on this issue and yours is definitely not the only voice that points out that existing big players stand to benefit from regulation in its emerging form. I sincerely hope we find a way to balance reasonable regulations with the need for competition and open-source efforts in this space.
Thank you, Daniel. I'm still not sure what I think about the long-term AI risks, but this push seems rushed in a few ways, which isn't surprising given that they have more than one reason to feel the urge to regulate fast and deeply.
Regulation seems a minor concern compared to the death of all life and love, not to mention the harm AI already poses to human creativity.
The fact that many who promote AI actively advocate for human extinction makes me very leery of giving them any credit.
I agree. But the issue is much more nuanced. Is AI how we kill all life and love? That's a very wild assumption. Also, it's absurd to lump together people who think fears of AI are exaggerated and people who advocate for humanity's end. Even if they express a similar opinion on this specific topic, it's a very shallow analysis to assume that a coincidental convergence extrapolates to any other belief or value.
I would say that a natural consequence of creating a similar competitor in a narrow niche, as demonstrated by biology, is extinction.
But more specifically, it's simple rationality: we also regulate cars and nuclear weapons, and they don't kill us all. Something with even a 5% chance of human (and, more broadly, biological) extinction warrants significantly more caution.
I don't see the room for nuance while a train is hurtling toward your children. Once it's slowed down, you can always discuss and even resume lethal velocity, but the first step, I think, is to hit the brakes.
You can't expect the rules of biology to both apply and not apply to AI at the same time.
Regulation is good if done well. I argue why I think this one could be done better and where the problems lie. It's explicitly stated: "I don’t think anyone argues in favor of no regulation"
"I don't see the room for nuance while a train is hurtling toward your children." That doesn't look like a good analogy to me. I've written many times about the problems that I see with AI and one of them is always overstating things, one way or the other. To me, your "analogy" is an example of that.
I don't see where I have argued that selection doesn't apply to AI vs. humans. Dan Hendrycks has explored this, as has Geoffrey Hinton: even without the speedup in cognition, substrate independence puts humanity at major risk.
The analogy might seem overstated to you, but it seems the only logical one at the moment, given the sword it holds over us.
As usual, I will bellow that none of this matters. :-)
If AI is made perfectly safe, it will further fuel an accelerating knowledge explosion which will generate ever more, ever larger powers, which will also have to be managed somehow. As the emerging powers grow in number and scale, it becomes ever less likely that we will be able to successfully manage them all.
What we might learn from the AI Safety Summit is that the leaders of governments and the AI industry seem not to grasp the threat posed by the underlying knowledge explosion any better than we do. The "experts" appear to still be focused on particular threats arising from particular technologies, a focus seemingly based on an assumption that they will be able to manage all coming technologies as they emerge, one by one by one, no matter how large such technologies may be, or how fast they arrive.
The knowledge explosion itself, the engine generating all the emerging powers and whatever threats they present, appears to remain a matter of holy dogma which can't be challenged, much as it would have never occurred to those in medieval times to question the divinity of Jesus.
What I can learn from commenting on such matters is that I can keep typing such sermons endlessly until my very last breath, and none of that will matter either. :-)
Hey Phil, that's a very nihilistic approach to this and all matters! Isn't it weird that you keep commenting the same thing if you truly believe it doesn't matter? There's a paradox in there!
Jokes aside, I think your narrative of the knowledge explosion has little power because you don't concretize it with examples. What does it mean that we won't be able to manage it all? How are you so sure we can't manage it? What are the specific implications or consequences? You say we will have "ever larger" powers, but what is that exactly? Why do you think knowledge is dangerous in and of itself? And if you are going to use the violence argument, know that, if anything, this "ever more" knowledge has allowed us to reduce violence around the world to historical minima - a reduction that traces a consistently monotonic downward curve over time (except during wars and other conflicts) (source: https://slides.ourworldindata.org/war-and-violence/#/title-slide).
Even if we take into account nuclear weapons, which admittedly were not the best idea we've had, it doesn't seem to be the case that more knowledge is bad at all. Actually, the exact contrary thesis (that of effective accelerationists, but applied to knowledge in general and not just technology) appears to be the more correct take, even though it's quite flawed itself.
Hi Alberto!
Well, correction welcome: obviously, all this matters. What I should have said is that there appears to be nothing we can do about it. And yes, paradox abounds in my writing life; it extends far beyond this subject. Too long a story to tell here, though.
I also agree, and have been told, that my pitch about the knowledge explosion exists on "too high a level of abstraction". I think that's a fair critique. So let's try a simple example...
How much more power do we want people like Putin to have?
What difference does it make that the accumulation of knowledge has accomplished this or that if a single human being can erase all those accomplishments in minutes?
I appreciate your questions, and our ongoing dialog, which I find enjoyable.
But it must be said, I've been writing about this for years now, and I see little evidence doing so is accomplishing anything. My arguments could surely be improved in more skilled hands, but in the end I'm not sure that would matter. I think the real problem is that all writers can offer is reasoned calculations, and that is the wrong channel. If the arguments were perfect, they would still be on the wrong channel.
My best guess for the moment is that any real progress on this subject is going to require some dramatic real-world event (or events) that is easily understood by a broad public. We humans typically learn more from pain than we do from reason.
Hey Phil, I like that you are so open to critique; it reveals good character! I try to be, too.
About the Putin example, I have to say that it's still a thought experiment, not a concrete instance of what you believe could happen. So far, Putin hasn't dropped a nuclear bomb anywhere. What makes you think he will? From the rest of the world's point of view (i.e., everyone who is not the US), perhaps it's more dangerous that the US has those bombs, as it's the only country that has ever dropped one. Whatever your opinion of Putin is, that's not really a good argument if it can be refuted this easily, don't you think?
I agree that it's hard for this kind of paradigmatic argument to have any effect (it would entail a complete overhaul of our civilization if we did what you say) but the first step for it to matter is, I think, making it more robust to counterarguments. Making it more definite, more specific. Branching out possibilities. Making your own counterpoints and refuting those as well. Etc, etc.
Otherwise, people won't ever take it seriously, and you won't know whether it's because they don't care, because they can't afford to admit the terrible truth it reveals, or simply because it's not a good argument.
As I understand it, the threat arises not so much from particular situations as it does from the larger pattern which all the particular situations are part of.
So, for example, Putin may never use nukes. Maybe nobody ever will. That could be. Maybe AI is nowhere near as dangerous as some currently think it is. That could be, too. My arguments are necessarily abstract, because it's not possible to prove that any particular situation will happen, or not happen.
So I have to ask abstract questions like this:
Do we think there is any limit to the human ability to successfully manage power?
If we answer yes, then it follows that we are currently traveling towards that limit, whatever that limit may be. We can reasonably debate what the limit of our ability is, and how fast we are moving towards it. But no one can provide a definitive answer to such questions, so speculation regarding such details isn't very useful.
Getting back to AI, which is of course the topic of this blog...
The real danger of AI may not be AI itself. Maybe AI doomerism is overblown hype, that could be.
Maybe the real danger presented by AI is the way that AI will act as a further accelerant to the knowledge explosion, much as computers and the Internet have. Maybe the real danger arises not from AI, but from some future power of vast scale which arises from an AI accelerated knowledge explosion.
If the knowledge explosion continues on its current course, and if human ability is limited, logic seems to dictate that sooner or later we will inevitably arrive at some collection of powers that exceeds our ability to manage.
I know, and I agree, this is all too abstract. It's what I know how to do, that's all. I'm happy to invite anyone to try their hand at this topic and make it more concrete.
Does greed make the hunger for power more concrete? AI job losses to new opportunities?