There are risks, but there are also really big risks in not going as fast as we can. The country with the best AI will have a huge economic and military advantage. I don't want to be subject to military attacks from Russia or China that we can't counter. I don't want American goods to be too expensive to compete with those from China. We can be sure that Russia, China, and others are going as fast as they can, and nobody will hold them back.
Suppose we had not built the bomb because we knew the risks, and Germany did.
I almost included a section about this but finally decided it wasn't sufficiently on-topic. Some people have commented on it. The bottom line: China doesn't seem to have any chance of developing an edge over the West on this in six months.
Risk denialism is in our minds for good reason. I think stepping on a stone at least once gives evidence that the risk is real, as opposed to some invented, imaginary theory. Once we’ve seen bad things happen, we’re convinced the risk is real. Of course, this assumes there are no good models to make predictions, but we didn’t evolve surrounded by good models.
Agreed! I'm really not taking any stance here, just stating an observation about human nature.
I love your perspective, but I really gotta wonder who it is you're addressing with this sentence:
"Maybe they’re right that it's time 'to pause and reflect.'”
The juiced-up tech bros who are trying their damnedest to hop on the next multi-billion dollar express train? The sketchy tweakers who need a new fix, having lost their dose of crypto? The sensible leaders of the corporations that are FUCKING FIRING THEIR ETHICS TEAMS who tell us in the most patronizing tone possible that they understand the concerns of the doomsayers and fearmongers and are doing their level best to make sure that AI research proceeds in an ethical and safe way AFTER HAVING FUCKING FIRED THEIR ETHICS AND SAFETY TEAMS? The legislators/justices who have literally no clue whatsoever about what it is that should be paused and reflected over?
Is there any conceivable way that any of these main players and essential drivers in the pursuit of AGI might "pause," much less "slow down just a little bit above the speed limit of sane research and development"? Or *reflect*? Reflect in what? The sideview mirror, where objects we've already passed are now closer than they appear?
They won't. There will be no pausing, no reflection, no "hold up for one darn second, humanity!" There is profit in them thar transformers, and, by God, who in their right mind wouldn't strike out to find their fortune on the new frontier!
I completely agree with you--I just think that the stone metaphor you're using presupposes a tranquility that the representatives of the market forces just don't give a shit about.
I'm going to delve into the problem of unchecked "market forces" using Bill Gates' recent article and his second recommendation in the conclusion. I think now that AI has gone mainstream and policymakers are stepping in (or pretending to do so), these issues are more important than ever.
Completely agree with the sentiment here. That's why I can't help but conclude that "we're going to do it anyway."
The thing about AI that has certain people’s backs up is that they are only now starting to realise how stupid they are, while they’re also starting to realise that others have absolutely no idea how stupid they are. Except this time the stupidity has a deeper prospective gravitas.
An insightful article, and much in the way of sources to delve into. I’m not quite sure where I sit. An incredible time to be a student of digital humanities and computer science, however.
I’m only at the beginning of that journey, but I’m hoping AI allows us to produce, create, and live more! I have begun a small project getting AI to emulate the works of renowned poets discussing these very topics. It would be great if anyone has some feedback. The first is on “humanity becoming dependent on AI to perform tasks” - https://open.substack.com/pub/musingsofanai/p/the-tethered-souls
Whilst I broadly agree with Alberto's views on the FLI letter, I am not that pessimistic (yet). I think there is a way out, although even I have serious doubts about whether we, humans, will take it up. More in the article I have just published on Medium: https://sustensis.medium.com/prevail-or-fail-will-agi-emerge-by-2030-2fc048641b87.
I really want to read arguments that ignite my dormant optimism about all this!
I will provide all those arguments soon.
I think the intention of the FLI open letter may be good, but the proposal may not be the most appropriate. LLM technologies advance at a much faster pace than laws and regulations. Throughout history, given the inertia of governments and regulatory bodies, the regulatory response to technological and industrial advances has lagged behind; the response to new technologies such as LLMs will likewise take time, but it will come sooner or later.
The pace and momentum of technological progress in LLMs is unstoppable. This is a fact. The question is: "What adaptive response will we take in the face of this fact to continue our progress and evolution?" Stopping everything and "putting your head in a hole" does not seem the most appropriate response. Nevertheless, the letter does at least one good thing: it opens the debate.
Unless everyone agrees to stop, the risk of a bad outcome would seem to increase. For example, the atomic bomb: Hitler was seeking to build one and would not have been dissuaded by any arguments about potential future harms. If Hitler had been the only one with the A-bomb, would we have been better off?
As Alberto points out, we're not all that good at making predictions. This especially includes predictions about the risks and benefits from any developing technology and its socio-cultural sequelae. It's not at all clear that we can make a rational decision about whether or not to deploy a particular technology of this magnitude and complexity.
We are arguably better off today than we were 100 or 1,000 or 10,000 years ago -- anyone volunteering to go back to those times? Humans are not homo sapiens but homo faber -- we're not wise, we make stuff. Like all other technologies before it, so-called AI (LLMs) will have some bad impacts -- which we will have to mitigate -- but will ultimately improve human life.
One important difference between malicious AI and an atomic bomb lies in their potential for collateral damage.
People have a strong moral response to the indiscriminate infliction of damage; it feels profoundly unfair to bomb innocent bystanders who have no agency. By contrast, we feel less compassion for people who take their own lives after an AI convinces them to do so, even if in total the latter far outnumber the former.
A nuclear mushroom is an easily understood and powerful image. Exposure to a slowly working, emotionally corrosive force like some social media barely registers.
AI falls into the second category. Our responses and awareness are maladapted for such a scenario.
Do you mean to say the total number of people who have been talked into suicide by the chatbot outnumbers the people who have been killed by bombs? I don’t think that’s the case. Or do you mean that if this were to happen (the suicides started to outnumber the huge number of people killed by bombs), then people would not have compassion for those who commit suicide? Both things are awful, and people would have compassion for those in both scenarios. However, depending on how the scenarios were framed, they might have more of an investment in the tragedy of being talked into suicide. People turn away from questions about civilian casualties, possibly from guilt, or possibly from horror or a sense of powerlessness. The victims of war are often treated as abstractions, perhaps because it’s unimaginable to us. But someone being talked into suicide is easier to imagine, and we could imagine it happening to people we love, or to depressed young people. I suspect there would be a lot of concern for people if this happened to them.
I signed the letter, having misgivings about some aspects. These have all been raised online by others. I liked the accountability and watermark aspects, as well as the general need for far more coordinated and independent thought (with or without their overall goal). There is hardly a petition I fully agree with, and this one was far from the best. Still, there are times when showing up is better than getting it perfectly right. On balance I felt it would be a useful added incentive for societies to look deeper into these matters if sufficient numbers signed. There is a marked increase in discussion on the back of it. Some countries/states are bound to make better decisions than others, hopefully sooner due to the focus.

There have been clear perspective shifts on the back of Cambridge Analytica and harmful social media effects in general. People are more aware of downsides. I agree with your statement that humanity seems to learn on the back of disasters. Societies can then improve, for a few generations, until collective memory fades. AI is already helping science in new ways. We will make use of it as we made use of radiation, in reasonable and unreasonable ways, and fingers crossed we will not mess it up so badly that there is no way back. We did not stop atom bombs, but on balance we have not used them as much as I feared growing up during the Cold War (I may be naive about the future).

I think other risks are far more likely to wipe us out than AI itself (in its current state). Its indirect impact on society, the ways in which it can be channeled to influence us when embedded in social media, is an insidious risk. Regulation and accountability can have an impact. Right now societies can be freely used as guinea pigs.
I don't agree, but I understand your concern. Science is one of the main aces our species has up its sleeve to survive. Or rather our imagination, which underpins science and is our main survival skill. Ever more powerful applications are needed to address our problems. This also needs to be combined with regulation and considerable thought so society can adapt. We are essentially a highly developed primate society, with all the problems that such a past currently entails. Our main task ahead is not to halt progress but to focus on ways to develop how societies function. Currently it seems as if Athens and Sparta are once more at odds, and that did not end well in the past.
It’s strange how people who argue this have wild presuppositions that don’t strike them as wild: 1) we’ll get an AGI very soon; 2) the AGI will have desire-like motivations; 3) any creature with greater intelligence and desire-like motivations will (a) be capable of destroying the world and (b) definitely want to destroy the world. Of course, it’s something that can happen. But these conclusions don’t really follow. Is there a better argument for these claims? Or any argument?
What you say is probably the most reasonable account of how this would happen. It’s very difficult to see why it would happen, though. Many of these traits are biological and social, the product of millions of years of physical evolution. For example, we have physical appetites. It would be quite a scientific project, one that is still fundamentally in its early stages, to understand the causes of many of our qualities. So of course I am not in a position to say that they require physical appetites, emotions and desires with biological origins, anxiety about scarcity, cultural knowledge, socialization, past experience of violence, status-seeking (both cultural and possibly somewhat instinctual), etc. But very few of these qualities would ‘naturally’ emerge from intelligence. When we finally see AGI it will have emergent qualities, and these are unpredictable--so the unpredictability is certainly something to be concerned about. It could simply go haywire in ways we cannot anticipate. However, its qualities are unlikely to be married to our animal qualities, which is partly what you are describing, because AI is not an animal.
I think an incredibly powerful tool is being unleashed. But before we get to the point where it turns on us, we will have stupid humans using it in malevolent ways--e.g., surveillance, data harvesting, scams and schemes, particularly financial schemes that will crash economies...it has some bio-weapons or conventional weapons usefulness. Plus there is the unemployment it will create and the sheer frustration we will experience being herded and shepherded by algorithms. And people will shirk responsibility and give it to the algorithm. They will defer to the algorithm. So we will be slaves to it in various ways--but simply because some humans will use it as a tool against other humans. That’s going to be the greatest danger for quite some time.
"For a single risk perhaps this is true. But when we start piling up the risks one on top of another we are introducing uncertainty in to the social environment, and uncertainty generates fear. Fear and uncertainty can be a source of harm even if they are based on nothing."
Definitely agree. My analysis is superficial, as you know. We could write many books on this and not arrive at any conclusion ;)