There are risks, but there are also really big risks in not going as fast as we can. The country that has the best AI will have a huge economic and military advantage. I don't want to be subject to military attacks from Russia or China that we can't counter. I don't want American goods to be too expensive to compete with those from China. We can be sure Russia, China, and others are going as fast as they can, and nobody will hold them back.
Suppose we had not built the bomb because we knew the risks, and Germany did.
I almost included a section about this but finally decided it wasn't sufficiently on-topic. Some people have commented on this. The bottom line: China doesn't seem to have any possibility of developing an edge over the West on this in 6 months.
Risk denialism is in our minds for good reason. I think stepping on a stone at least once gives evidence that the risk is real, as opposed to some invented, imaginary theory. Once we've seen bad things happen, we're convinced the risk is real. Of course, this assumes there are no good models for making predictions, but we didn't evolve surrounded by good models.
Agreed! I'm really not taking any stance here, just stating an observation about human nature.
Alberto writes, "A risk isn’t yet a harm because it hasn’t happened."
For a single risk perhaps this is true. But when we start piling up the risks one on top of another we are introducing uncertainty into the social environment, and uncertainty generates fear. Fear and uncertainty can be a source of harm even if they are based on nothing.
Civilization is based on faith in the future. It's that faith in a better tomorrow that keeps people getting up every day to go to jobs they don't enjoy. When such faith begins to crumble, we see phenomena like many of today's young people deciding not to have children because they've lost faith that we can manage climate change.
The economic system is built on faith that if one invests one will see a positive return. If people lose faith in the future, they stop making such investments, and the wheels of the economy start grinding to a halt.
The entire system is built on faith, and if we introduce too many unknowns too fast we put that foundation at risk.
"For a single risk perhaps this is true. But when we start piling up the risks one on top of another we are introducing uncertainty in to the social environment, and uncertainty generates fear. Fear and uncertainty can be a source of harm even if they are based on nothing."
Definitely agree. My analysis is superficial, as you know. We could write many books on this and not arrive at any conclusion ;)
I love your perspective, but I really gotta wonder who it is you're addressing with this sentence:
"Maybe they’re right that it's time 'to pause and reflect.'”
The juiced-up tech bros who are trying their damnedest to hop on the next multi-billion dollar express train? The sketchy tweakers who need a new fix, having lost their dose of crypto? The sensible leaders of the corporations that are FUCKING FIRING THEIR ETHICS TEAMS who tell us in the most patronizing tone possible that they understand the concerns of the doomsayers and fearmongers and are doing their level best to make sure that AI research proceeds in an ethical and safe way AFTER HAVING FUCKING FIRED THEIR ETHICS AND SAFETY TEAMS? The legislators/justices who have literally no clue whatsoever about what it is that should be paused and reflected over?
Is there any conceivable way that any of these main players and essential drivers in the pursuit of AGI might "pause," much less "slow down just a little bit above the speed limit of sane research and development"? Or *reflect*? Reflect in what? The sideview mirror, where objects we've already passed are now closer than they appear?
They won't. There will be no pausing, no reflection, no "hold up for one darn second, humanity!" There is profit in them thar transformers, and, by God, who in their right mind wouldn't strike out to find their fortune on the new frontier!
I completely agree with you--I just think that the stone metaphor you're using presupposes a tranquility that the representatives of the market forces just don't give a shit about.
I'm going to delve into the problem of unchecked "market forces" using Bill Gates' recent article and his second recommendation in the conclusion. I think now that AI has gone mainstream and policymakers are stepping in (or pretending to do so), these issues are more important than ever.
Completely agree with the sentiment here. That's why I can't help but conclude that "we're going to do it anyway."
The thing about AI that has certain people’s backs up is that they are only now starting to realise how stupid they are, while they’re also starting to realise that others have absolutely no idea how stupid they are. Except this time the stupidity has a deeper prospective gravitas.
Excellent as usual, Alberto. As you predicted, there is much in your article that I can embrace and agree with. And thanks much for the mention.
Yes, we don't learn by reasoning anywhere near as much as we like to believe. A reference to authority is more influential, and our most persuasive teacher is pain. For example, I've come to believe that nothing meaningful is going to happen on nuclear weapons until after the next detonation. We just can't get it in the abstract; we have to see it to believe it. What happens after that is anybody's guess.
You told us that Oppenheimer said, "scientists must expand man’s understanding and control of nature". This is the kind of simplistic, outdated 19th century thinking that I've been writing to reject. What scientists need help with is understanding _human_ nature, which does not allow for the acquisition of ever more knowledge at an accelerating rate without limit. In the 21st century we have to become more intelligent than that.
https://www.tannytalk.com/p/our-relationship-with-knowledge
My hope for the AI community is that we might invest some time into zooming out to reflect on the larger environment which AI research inhabits. AI is perhaps not the problem so much as it is a symptom of the problem: an outdated relationship with knowledge. Underneath all the technical issues lies a serious philosophical challenge, the need to update our relationship with knowledge to adapt to the new environment which the spectacular success of science has created. We would be wise to recall that species which can't adapt to changing conditions typically don't last long.
The real danger from AI may be that it seems likely to serve as rocket fuel poured on an already overheated knowledge explosion. The most dangerous threats may arise not so much from AI itself as from all the different research areas which AI is likely to further accelerate.
Knowledge is good. Knowledge without limit is not. It's not that complicated.
An insightful article, and much in the way of sources to delve into. I'm not quite sure where I sit. An incredible time to be a student of digital humanities and computer science, however.
I'm only at the beginning of that journey, but I'm hoping AI allows us to produce, create, and live more! I have begun a small project getting AI to emulate the works of renowned poets discussing these very topics. It would be great if anyone has some feedback. The first is on “humanity becoming dependent on AI to perform tasks” - https://open.substack.com/pub/musingsofanai/p/the-tethered-souls
Whilst I broadly agree with Alberto's views on the FLI letter, I am not that pessimistic (yet). I think there is a way out, although even I have serious doubts about whether we, humans, will take it up. More in the article I have just published on Medium: https://sustensis.medium.com/prevail-or-fail-will-agi-emerge-by-2030-2fc048641b87.
I really want to read arguments that ignite my dormant optimism about all this!
I will provide all those arguments soon.
I think the intention of the FLI open letter may be good, but the proposal may not be the most appropriate. LLM technologies advance at a much faster pace than laws and regulations. Throughout history, responses to technological and industrial advances have lagged because of the inertia of governments and regulatory bodies; the response to new technologies such as LLMs will likewise take time, but it will come sooner or later.
The pace and momentum of technological progress in LLMs is unstoppable. This is a fact. The question is: "What adaptive response to continue our progress and evolution will be taken in the face of this fact?" Stopping things and "putting your head in a hole" does not seem the most appropriate. Nevertheless, at least this letter does something good: it opens the debate.
Unless everyone agrees to stop, the risk of a bad outcome would seem to increase. Take the atomic bomb, for example. Hitler was seeking to build one and would not have been dissuaded by any arguments about potential future harms. If Hitler were the only one with the A-bomb, would we have been better off?
As Alberto points out, we're not all that good at making predictions. This especially includes predictions about the risks and benefits from any developing technology and its socio-cultural sequelae. It's not at all clear that we can make a rational decision about whether or not to deploy a particular technology of this magnitude and complexity.
We are arguably better off today than we were 100 or 1000 or 10,000 years ago -- anyone volunteering to go back to those times? Humans are not homo sapiens but homo faber -- we're not wise, we make stuff. Like all other technologies before it, so-called AI (LLMs) will have some bad impacts -- which we will have to mitigate -- but ultimately will improve human life.
One important difference between malicious AI and an atomic bomb is their potential for collateral damage.
People have a strong moral response to the indiscriminate infliction of damage, i.e., it feels profoundly unfair to bomb innocent bystanders with no agency. Whereas we feel less compassion for people who kill themselves after an AI has convinced them to commit suicide, even if in total the latter far outnumber the former.
A nuclear mushroom cloud is an easily understood and powerful image. Exposure to a slowly working, emotionally corrosive force like some social media barely registers.
AI falls into the second category. Our responses and awareness are maladapted for such a scenario.
Do you mean to say the total number of people who have been talked into suicide by chatbots outnumbers the people who have been killed by bombs? I don’t think that’s the case. Or do you mean that if this were to happen (the suicides started to outnumber the huge number of people killed by bombs), then people would not have compassion for those who commit suicide? Both things are awful, and people would have compassion for those in both scenarios. However, depending on how the scenarios were framed, they might have more of an investment in the tragedy of being talked into suicide. People turn away from questions about civilian casualties, possibly from guilt or possibly from horror or a sense of powerlessness. The victims of war are often treated as abstractions, perhaps because it’s unimaginable to us. But someone being talked into suicide is easier to imagine, and we could imagine this happening to people we love, or to depressed young people. I suspect there would be a lot of concern for people if this happened to them.
I signed the letter, having misgivings about some aspects. These have all been raised online by others. I liked the accountability and watermark aspects, as well as the general need for far more coordinated and independent thought (with or without their overall goal). There is hardly a petition I fully agree with, and this one was far from the best. Still, there are times when showing up is better than getting it perfectly right. On balance I felt it would be a useful added incentive for societies to look deeper into matters if sufficient numbers signed. There is a marked increase in discussion on the back of it. Some countries/states are bound to make better decisions than others, hopefully sooner due to the focus.

There have been clear perspective shifts on the back of Cambridge Analytica and harmful social media effects in general. People are more aware of downsides. I agree with your statement that humanity seems to learn on the back of disasters. Societies can then improve, for a few generations, until collective memory fades.

AI is already helping science in new ways. We will make use of it as we made use of radiation, in reasonable and unreasonable ways, and fingers crossed we will not mess it up so badly there is no way back. We did not stop atom bombs, but on balance we have not used them as much as I feared growing up during the Cold War (I may be naive about the future). I think other risks are far more likely to wipe us out than AI itself (in its current state). Its indirect impact on society, the ways in which it can be channeled to influence when embedded in social media, is an insidious risk. Regulation and accountability can have impact. Right now societies can be freely used as guinea pigs.
You write, "AI is already helping science in new ways."
I would argue that the primary challenge for science today is not technical, but philosophical. We're failing to edit our relationship with knowledge so as to adapt to the radically new environment created by the success of science. We keep pushing for "more, more and more" knowledge as if we're still living in the long era of knowledge scarcity. Using AI to further accelerate an already overheated knowledge explosion is an example of that philosophical failure in action.
Modern science culture is looking backwards, not forward as it assumes.
Those who wish to argue that science should continue to pursue ever more knowledge at an ever faster pace should provide the proof that human societies can successfully manage ever more, ever larger powers, delivered ever more quickly. They should demonstrate that we are creatures of unlimited ability. The fact that so few elites seem to even realize that we bear this proof burden demonstrates to me that we are not creatures of unlimited ability, and thus should not be seeking unlimited knowledge at an unlimited pace.
If we insist on using AI to further accelerate science, the outcome will be that we face ever more challenges like the ones the signers of the FLI letter are wrestling with. Maybe we can figure out what to do about current AI concerns; that is unknown. What can be known is that if we keep accelerating the underlying knowledge development process, it's inevitable that we will sooner or later arrive at challenges that we can't meet.
In earlier decades this was referred to as the Peter Principle, where an effective employee keeps getting promoted until they finally reach a position that they aren't qualified for.
I don't agree, but I understand your concern. Science is one of the main aces our species has up its sleeve to survive. Or rather our imagination, which underpins science and is our main survival skill. Ever more powerful applications are needed to address our problems. This also needs to be combined with regulation and considerable thought so society can adapt. We are essentially a highly developed primate society, with all the problems that such a past currently entails. Our main task ahead is not to halt progress but to focus on ways to develop the way societies function. Currently it seems as if Athens and Sparta are once more at odds, and that did not end well in the past.
Hi Michel, thanks for your reply, appreciated.
I'm not against science.
I'm against ever more science at an ever faster pace without limit. I'm against stubborn stupidity. Such a simple-minded, outdated relationship with knowledge is not an "ace up our sleeve". It's a philosophical failure. It's a clinging to the past. It's a willful rejection of the need to adapt to the new environment which the success of science has created. It's a death sentence for the modern world.
https://www.tannytalk.com/p/our-relationship-with-knowledge
Some science is good. More science without limit is not. The fact that even the brightest, most highly educated among us so often don't grasp the difference is just more evidence that we aren't ready for AI.
Alberto, thanks for the link to Yudkowsky's take on the FLI letter, and AI generally. I was delighted to find an expert who makes me look like a calm reasonable person of nuance. :-)
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
If/when your time and interest permit, I'd welcome an article that introduces Yudkowsky to those of us "not in the know". I'm mostly interested in how Yudkowsky and his perspective are regarded by the AI community at large. Is he considered a visionary, a crackpot, an extremist, a leader, etc.? Is he influential, ignored, respected, or disregarded? Or something else?
In his Time piece he writes...
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."
And this...
"Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going."
And this...
"Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then". Hearing this gave me a tiny flash of hope, because it’s a simpler, more sensible, and frankly saner reaction than I’ve been hearing over the last 20 years of trying to get anyone in the industry to take things seriously. Anyone talking that sanely deserves to hear how bad the situation actually is, and not be told that a six-month moratorium is going to fix it."
It's been an interesting education to think I'm Mr. Radical, and then find out somebody smarter than me already has the job. Maybe he'll let me wash his car, and do his laundry or something. :-)
Oh, look at this, his Wikipedia page reports that Yudkowsky did not attend high school or college. https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
It’s strange how people who argue this have wild presuppositions that don’t strike them as wild: 1) we’ll get an AGI very soon; 2) the AGI will have desire-like motivations; 3) any creature with greater intelligence and desire-like motivations will (a) be capable of destroying the world and (b) definitely want to destroy the world. Of course, it’s something that can happen. But these conclusions don’t really follow. Is there a better argument for these claims? Or any argument?
Yudkowsky can speak for himself of course. I can only report my first impression of his perspective. I take that perspective to be that it could happen, so we'd better make sure it doesn't.
I don't know if there is a better argument. But here's a bit of an article I'll be publishing tomorrow.
Consider the relationship between human parent and child. The child is not a duplicate of the parent; it has its own identity. But the child is built upon a foundation of what has come before. Every generation is the latest version of the previous generations.
We are the parent of AI. AI won't inherit our bodies, but it inevitably has to inherit some properties of the human mind.
We'll build some of our beliefs and assumptions into AI, perhaps without even realizing it. For example, we typically assume that more knowledge is always better. Note how all speculation about what AGI will be like contains this assumption, and nobody seems to challenge it, as we take it to be an obvious given.
So if we see AI as an amplified version of the fundamental nature of the human mind, then some of Yudkowsky's predictions begin to seem more reasonable. Humans are, among other things, ruthless killers.
Another reference point is the wider world of nature, and the eternal dance between predator and prey. If one version of AI is peaceful and obedient, it may be "eaten" by other, more aggressive versions of AI. The peaceful AI may develop defenses, the predator AI may develop responses to the defenses, and on and on the battle goes, just another story in evolution.
Here's the part that interests me the most. If AGI inherits our fundamental nature, part of that nature is suicidal. According to the CDC there is another suicide, on average, every eleven minutes in the United States.
We're all worried about what harm AGI might bring to us. What I haven't seen discussed so far is, what harm might AGI inflict upon itself?
What you say is probably the most reasonable account of how this would happen. It’s very difficult to see why it would happen, though. Many of these traits are biological and social, the product of millions of years of physical evolution. For example, we have physical appetites. It would be quite a scientific project, one which is fundamentally in its early stages, to understand the causes of many of our qualities. So of course I am not in a position to say that they require physical appetites, emotions and desires that have biological origins, anxiety about scarcity, cultural knowledge, socialization, past experience of violence, status-seeking (both cultural and possibly somewhat instinctual), etc. But very few of these qualities would ‘naturally’ emerge from intelligence. When we finally see AGI it will have emergent qualities, and these are unpredictable--so the unpredictability is certainly a thing to be concerned about. It could simply go haywire in ways we cannot anticipate. However, they are unlikely to be married to our animal qualities, which is partly what you are describing, because AI is not an animal.
Well, AGI would have a physical body too, and would require outside resources like electricity. But yea, the physical part would be very different than our biological properties.
Most of what you mention is not physical but mental, and it's the mental part that will be passed along in some form. For example, we typically assume without questioning that life is better than death, even though there is actually no proof of that at all. Given that it's us who will be giving birth to AI, unexamined assumptions like that are likely to find their way into AI in some form or another.
And then there's our complex confusion. We assert "life is better than death", but not Putin's life. Not the tiny bug I stepped on this morning while making my coffee, without giving it a thought. A baby, yes, life is better than death there. Unless the baby comes at an inconvenient time, or has a serious malfunction. All this confusion will probably find its way into AI.
After our physical needs are met, all human activity is based on philosophy, some collection of assumptions, beliefs, values. How would one design an AI system without referencing these philosophies?
Your emergent properties idea is interesting, I hadn't thought of that. Yes, that seems likely, and would be unpredictable.
Conversations like this are great. We should have them for another century before pushing ahead on AI. Let's think it through. Let's be intelligent before we try to make AI intelligent.
I think an incredibly powerful tool is being unleashed. But before we get to the point where it turns on us, we will have stupid humans using it in malevolent ways--e.g., surveillance, data harvesting, scams and schemes, particularly financial schemes that will crash economies... It also has some usefulness for bio-weapons or conventional weapons. Plus there is the unemployment it will create and the sheer frustration we will experience being herded and shepherded by algorithms. And people will shirk responsibility and give it to the algorithm. They will defer to the algorithm. So we will be slaves to it in various ways--but simply because some humans will use it as a tool against other humans. That’s going to be the greatest danger for quite some time.
You speak wisely. As I understand it, AI will serve to multiply and magnify everything we already do, and much of what we already do is indeed quite concerning, to say the least.
I used to think that nuclear weapons were THE problem. And there's still some truth in that. But as I thought about it more, I realized that even if we got rid of every nuclear weapon, violent men like Putin would just turn their attention to other means of projecting power, which would perhaps include AI. And then we'd be right back where we already are.
And so, much of my focus has shifted from dangerous tools to those who would use them. Like I keep yelling, in the 21st century violent men and an accelerating knowledge explosion are incompatible. One of them has to go. It's not optional.
https://www.tannytalk.com/p/world-peace-table-of-contents
That said, I would agree, getting rid of violent men wouldn't solve every challenge presented by AGI. Peaceful people of good intentions could still get in trouble using any technology of such awesome scale.