Isn’t it peculiar how superintelligence has suddenly become synonymous with existential risk?
All I see is men fencing with hypotheticals and vague thought experiments, letting their imagination take them for a ride, warning the world about generation X of this technology.
I’m sorry, but I don’t buy it.
Andrew Ng said it best in his post on The Batch: “When I try to evaluate how realistic these arguments are, I find them frustratingly vague and nonspecific. They boil down to “it could happen.” Trying to prove it couldn’t is akin to proving a negative. I can’t prove that AI won’t drive humans to extinction any more than I can prove that radio waves emitted from Earth won’t lead space aliens to find us and wipe us out.”
Agreed, Jurgen!
Fear is a very reasonable option when you don't want to die. We evolved it for a reason.
It is not like we are facing a tiger hiding in the bushes.
Is it not an even worse foe we face? Human greed driving us to self-destruction. No tiger was ever as deadly as we are.
Thanks for this take, Alberto. I've been observing the gradual split into several camps on this issue and yours is definitely not the only voice that points out that existing big players stand to benefit from regulation in its emerging form. I sincerely hope we find a way to balance reasonable regulations with the need for competition and open-source efforts in this space.
Thank you, Daniel. I'm still not sure what I think about the long-term AI risks, but this seems rushed in a few ways, and that's not surprising given that they have more than one reason to feel the urge to regulate fast and deeply.
Regulation seems a minor concern compared to the death of all life and love, not to mention the harm AI already poses to human creativity.
The fact that many who promote AI actively advocate for human extinction makes me very leery of giving them any credit.
I agree. But the issue is much more nuanced. Is AI how we kill all life and love? That's a very wild assumption. Also, it's absurd to lump into the same group people who think fears of AI are exaggerated and people who advocate for humanity's end. Even if they express a similar opinion on this specific topic, it's a very shallow analysis to assume that a coincidental convergence extrapolates to any other belief or value.
I would say that a natural consequence of creating a competitor in your own narrow niche, as biology demonstrates, is extinction.
But more specifically, it's simple rationality: we also regulate cars and nuclear weapons, and those don't kill us all. Something with even a 5% chance of human (and, more broadly, biological) extinction warrants significantly more caution.
I don't see the room for nuance while a train is hurtling toward your children. Once it has slowed down, you can always discuss and even resume lethal velocity, but the first step, I think, is to hit the brakes.
You can't expect the rules of biology to apply and not apply, at the same time, to AI.
Regulation is good if done well. I argue why I think this one could be better and where the problems lie. As I explicitly state: “I don’t think anyone argues in favor of no regulation.”
"I don't see the room for nuance while a train is hurtling toward your children." That doesn't look like a good analogy to me. I've written many times about the problems that I see with AI and one of them is always overstating things, one way or the other. To me, your "analogy" is an example of that.
I don't see how I have argued that selection doesn't apply to AI vs. humans. Dan Hendrycks has explored this, as has Geoffrey Hinton: even without the speedup in cognition, substrate independence puts humanity at major risk.
The analogy might seem overstated to you, but it seems the only logical one at the moment, given the sword it holds over us.
Hey Phil, that's a very nihilistic approach to this and all matters! Isn't it weird that you keep commenting the same thing if you truly believe it doesn't matter? There's a paradox in there!
Jokes aside, I think your narrative of the knowledge explosion has little power because you don't concretize it with examples. What does it mean that we don't manage it all? How are you so sure we can't manage it? What are the specific implications or consequences? You say we will have "ever larger" powers, but what is that exactly? Why do you think knowledge is dangerous in and of itself? And if you are going to use the violence argument, know that, if anything, this "ever more" knowledge has allowed us to reduce violence around the world to historical minima; that reduction traces a consistently monotonic downward curve over time (except during wars and other conflicts) (source: https://slides.ourworldindata.org/war-and-violence/#/title-slide).
Even if we take into account nuclear weapons, which admittedly were not the best idea we've had, it doesn't seem to be the case that more knowledge is bad at all. Actually, the exact contrary thesis (that of the effective accelerationists, but applied to knowledge in general and not just technology) appears to be a more correct take, even though it's quite flawed itself.
Hey Phil, I like that you are so open to critique; it reveals good character! I try to be, too.
About the Putin example, I have to say that it's still a thought experiment, not a concrete instance of what you believe could happen. So far, Putin hasn't dropped a nuclear bomb anywhere. What makes you think he will? From the rest of the world's point of view (i.e., everyone who is not the US), perhaps it's more dangerous that the US has those bombs, as it's the only country that has ever dropped one. Whatever your opinion of Putin is, that's not really a good argument if it can be refuted this easily, don't you think?
I agree that it's hard for this kind of paradigmatic argument to have any effect (it would entail a complete overhaul of our civilization if we did what you say), but the first step for it to matter is, I think, making it more robust to counterarguments. Making it more definite, more specific. Branching out possibilities. Making your own counterpoints and refuting those as well. Etc., etc.
Otherwise, people won't ever take it seriously, and you won't know whether it's because they don't care, because they can't afford to admit the terrible truth it reveals, or simply because it's not a good argument.
Does greed make the hunger for power more concrete? Or AI job losses weighed against new opportunities?