Thanks Alberto for picking up on this. The story actually has an interesting additional angle to it: the human condition in pre-modern times was subject to the will of the gods, which could only be inferred through oracles and signs, never directly known. The Enlightenment taught us to see for ourselves, and since then we have taken it for granted that we could know the world and act according to knowledge. We now construct entities that will become more intelligent than us, that we cannot control (cf. Alfonseca et al. 2021, https://dx.doi.org/10.1613/jair.1.12202), and – as you point out – that we cannot even properly know. It is striking to realize that in this sense we will return to the pre-modern state. Modernity was but a phase.
"Modernity was but a phase." Loved this sentence. It surely feels like it. We knew nothing, then we began to know more and more, and eventually we may find out that not everything is for us to know.
It actually feels glorious. The return of wonder, the return of magic. Man once again humbled, once again knowing there are things he cannot understand, or surpass, or control.
It could be the end of the cult of rationality, which sounded nice in theory but resulted in people being arrogant without actually being rational. That's how we get 'scientism': 'trust the science' without ever reading a paper, and the papers themselves being unreproducible.
I fear cults/new religions will proliferate soon: some for and some against AI, all fueled by confusion, frustration, insecurity, and fear — which psychopaths are wont to leverage for material gain, status, etc.
It feels like the saying "we are a biological boot-loader for the next step on evolution's ladder" is becoming more and more likely. I'd love to hear what you guys think about that.
I see some truth in that, for sure (I'm not an advocate of transhumanism, though)
Who will understand the inner workings of the other quicker: humans of AI, or AI of humans?
Thoughtful piece. I think the advances will be exciting in the near term and more confusing in the future.
Good question!
Isn't Sutton merely saying we should not try to model what humans do and instead use computation in machine learning? That seems sort of inevitable. You seem to believe that AI--because it will have problem-solving capacities that will far outstrip any human--will surpass humans in some mysterious way that makes us not ‘the masters, the rulers’ but ‘the spectators.’ It would be helpful to bring this all down to earth. What's the causal story of how AI gets from here to there? You say they'll become so complex our minds won't be able to make sense of them. But there are already many complex systems no individual mind can grasp, and systems that we create but don't necessarily control. Is the idea that AI is going to be putting many tendrils out there for use and we won't be able to keep track of its use? Or is the idea that the computations won't be intelligible to us, even if the results seem accurate to us?

What seems very concerning about this post is not that you are pointing out we could create something highly complex and impactful whose effects are far beyond our ken. How many times have humans done this since the beginning of the industrial era? It's the implication that 'it's a superintelligence, it's amazing, it surpasses us, we are its subjects.' This is a tremendously dangerous idea, because so far the machines make mistakes frequently. We know this. They have biased algorithms, they can't find mistakes, they hallucinate. Our judgement has to be the last word on whether or not what they are doing is sufficient or good or correct. It has to be, because nobody else's can be.

So far, the machines do not have critical-thinking faculties. But even if they did, what would possibly be the point of our slavishness to them? They don't need anything. Should we do this because we admire what they can do? That would be like admiring an amazing washing machine if you have spent your life washing by hand. Should we do it because we need the information, knowledge, etc. they can bring? Yes, that is the only reason we should give the results of their computations priority of place. What ELSE would be the point?

I take it there is some futurism vibe, some transhumanism going on here. Is this correct? My sense is you're talking yourself into something. What's funny to me is that, if you're talking yourself into a thing based on sci-fi, I can only imagine the outcome of the sci-fi is somebody eventually wondering why the humans began to cede power to the machines. Very rarely do the heroes of sci-fi turn out to be the computers. Maybe there's a reason for that! They don't care about how great they are at what they do, and except in the way all machines can be admired, admiring them as agents in advance, before they even have agency (an agency that they don't need, really, and that we so far have no theory to explain why they would want), seems like it may be a category mistake.
"You seem to believe that AI--because it will have problem-solving capacities that will far outstrip any human--will surpass humans in some mysterious way that makes us not ‘the masters, the rulers’ but ‘the spectators.’"
Not really my point. This article isn't about AGI or superintelligence. The systems I'm referring to don't need to be more intelligent than us. I don't think they will be for a long time--but much sooner than that we'll lose our ability to understand them (not as individuals, but as humanity; so far, every system we've ever built could be understood and controlled in full if you selected the appropriate group of people to do so, and modern AI doesn't meet that criterion). We may even fail to develop ways to assess with certainty whether or not we've built an AGI. That truth may belong to the realm of mysteries.
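To put rough numbers on that "no group of people suffices" claim, here is a minimal back-of-the-envelope sketch in Python; the parameter count, team size, and review rate are illustrative assumptions, not figures from the article:

```python
# Hypothetical arithmetic: how long would a large team need just to look
# at every weight of a GPT-3-scale model once? All numbers are assumptions.
params = 175e9                        # assumed parameter count (GPT-3 scale)
seconds_per_year = 60 * 60 * 24 * 365
team_size = 1_000                     # assumed team of reviewers
rate = 1.0                            # assumed: one parameter per person-second

years = params / (team_size * rate * seconds_per_year)
print(f"~{years:.1f} years for the team just to glance at each weight once")
# ~5.5 years — and glancing at weights is far short of understanding them.
```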
Yes! Then I agree this is quite possible! Sorry for misunderstanding your point…
The irrelevancy is felt more acutely, for sure, and still we must keep building in the knowledge that we may (or will) be outpaced at any time. About the bitter lesson in the original article: sheer computational power still must have rules built in, rules which we set. Deep learning works on principles humans supplied. True, the model needs to be as flexible as possible and the rules as general as possible in turn. But we won't get there by murky jumps alone (and I don't believe in a singularity emerging from murkiness, not at this stage). We'll get there by learning from mistakes and by building better and with more knowledge.

The biggest problem is the access-to-knowledge part. We may not be able to understand the box, but how will we know whether all hope of understanding it is lost if we're not allowed to look inside? Meanwhile, people will continue to build in special knowledge, and those tools will outperform murkier ones until the next wave, possibly.

The problem of irrelevancy is not new: every scientist knows that the future means their work is likely to be outpaced, forgotten, disproven, or, at best, taken for granted and incorporated as a triviality in a larger whole. It is being part of the bridge that matters, and the knowledge and perspective that come with it.
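As a small illustration of "rules which we set": a minimal, hypothetical PyTorch sketch, unrelated to any particular system discussed above. Every line marked as a human choice is a prior we supply before any computation runs; only the learned weights are the "murky" part.

```python
import torch
import torch.nn as nn

model = nn.Sequential(        # human choice: depth, width, nonlinearity
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)
loss_fn = nn.MSELoss()        # human choice: what counts as "wrong"
opt = torch.optim.SGD(model.parameters(), lr=1e-2)  # human choice: how to improve

x, y = torch.randn(32, 16), torch.randn(32, 1)      # stand-in data
for _ in range(100):          # the murky part: weights we never set by hand
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```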
Sutton never denied the usefulness or value of human knowledge; he simply stated that more computation eventually overshadows it. I have to agree with Sutton there, but I can't deny that human knowledge is still essential to build all engineering systems--including AI, ML, or GPTs. This essay isn't so much an attempt to minimize the value of human knowledge as my way of rethinking our role in understanding the world.
Hmm, it sounds a little like EDA being driven by synthesis tools. In that sense we can never compete with computation, and technology advances on the back of earlier tech. Maybe I am missing the point. Your article refers to Twitter reactions. Is the shakeup due to the new applications bigger on the software development side than expected? Surely it is still mostly supportive rather than in the driver's seat? Trying to understand the gist of the article.
I have struggled documenting my feelings and thoughts on this matter. Thank you for accomplishing both.
Thanks Brooks :)
This is truly breakthrough thinking, especially as it is supported by some evidence that we may be losing control over AI much earlier, and in a way, we perhaps had not envisaged. Thank you, Alberto.
To those who advocate slowing down or shutting off AI research and development, I would say it is far too late to do that. We would have to go back to perhaps 19th-century civilisation, and then after several decades we might be in the same or a worse situation. We would need to have a powerful World Government controlling every citizen. It might have been possible just after 1945, when such a World Government was to be set up within six months! Read the UN history.
Anyway, it is all too late. In the current situation, any governmental or international control will at best be partial, and at worst partial and ineffective because of the methods applied. The only way to control AI is by becoming part of it. Transhuman AI Governors are the only way forward. It should be started right now. If you are interested, you can watch my video on this subject: https://www.youtube.com/watch?v=F3HzTi470Ac
I wouldn't really like "a powerful World Government controlling every citizen." And I don't think it's too late to slow down or implement adequate regulation. That's, in my view, the best approach to solve this (which is more than anything a philosophical pondering, not very useful) and all the other more tangible problems that surround AI: misinformation, bias and discrimination, power centralization, security issues, privacy concerns, lack of transparency, non-accountability, lack of data governance, and even those existential risks that seem so urgent for some and distracting sci-fi tales for others.
Thanks, Alberto. It seems that a difference between your view and mine may be in the 'event horizon'. I am one of those who assume that AGI will emerge by 2030, and your article provides further arguments for that. Therefore, I now consider the sacrosanct values that we have held for quite some time, such as freedom or sovereignty, as very relative in the context of the most important value - LIFE - and the survival of our species, or rather maintaining control over its evolution.
If we agree on that, then consequently we must see all efforts at AI regulation as dismal, just tinkering at the edges. Governments approach the task of introducing the necessary changes as if the world were still changing at a linear pace. If AI, genetics, materials science, etc., change at a nearly exponential pace, then to catch up with AI, governments and international organizations should abandon anachronistic procedures and try to adapt at a similar pace. That is, of course, impossible.
Therefore, the only chance is that the AI sector itself may hold the keys to effective AI control. It should follow the excellent example of how control of the Internet has been maintained for over 30 years by non-governmental organisations such as the W3C Consortium. I cover this subject in my latest video: https://www.youtube.com/watch?v=F3HzTi470Ac
Thanks, Phil. I cannot agree more with your view - see my comment above. But try to convince politicians about it. Why would these people, with their short-term view of the next election, sacrifice their position for something they don't understand, and perhaps prefer not to understand because the answer is just overwhelming? So the orchestra aboard the Titanic keeps playing on...
The eternal paternal dilemma of letting go of the power for our children. Let go, be the dust. Wait for the next cycle.
very cool writing, congrats.
I am thinking in a similar direction:
https://medium.com/@jochen.sautter/will-agi-be-conscious-4aed006ecea5
On principle, I stopped reading articles on GPT. But I'm glad I read this one. An amazing story. It makes me think about the book by Marcus van der Erve, AI God Arising, which thoughtfully describes how compute power is just a substrate for AI, and how it could start emerging in ways we don't yet fully understand. For me, AI is becoming a new form of faith. And I'm in constant superposition between a true atheist and a believer in its potential for future 'mystical' powers.
https://medium.com/socyc/ai-god-arising-bb1e243f9f7b
It's great
Your text is really insightful. I'm among the people who celebrate those advances with euphoria, but I feel this bitterness. Somehow a lot of people are already irrelevant to this system, and soon all of us will be the same. All this passivity in facing AI and other human issues is unbelievable. Even our imagination is already taken by this dark future ruled by the Machine God. We really need to free ourselves as soon as possible.
Hi, this is my first comment on the subject of ChatGPT. I'm not a software engineer but a citizen geospatial multidimensional space and place scientist.
And with the help of ChatGPT, I have envisioned not who, but what will help the human species understand the inner workings of the other quicker: humans of AI or AI of humans.
What we are looking at is a combination of both, translated as Transhuman-Centric Assistants.
Our mobile devices, with both front- and rear-facing cameras, provide the eyes to physical and invisible spaces. They are the gateway to multidimensional spaces and planes.
The mobile is the only device with the ability to take instructions from an AI that utilises computer vision and neural and node networks to present parallel artificial- and natural-world information through multi-agent principles.
And it is the digital twin of the human species.
This will enable humans to communicate on both poetic and cognitive real-world human prefrontal intelligence.
There will be a generation that will have their own transhuman digital twin extensions applied to their mobile devices.
The transhuman lives inside technologies and cannot exist outside of its host. Its only interaction with the outside world is through CCTV and audio, and any digital device with a camera, speaker, microphone and, most importantly, a human or humans.
I welcome your feedback
Netzero007
One possible version of the "Spectator" role is the human who receives all of the benefits of a benign AI without any of the traditional costs, e.g. economic, environmental, energy demands, mental health, etc. I hesitate to add social to that list, as that could go either way. We could find so much time on our hands that we do nothing but overshare and argue. Alternatively, maybe we'd find that most of our anxiety is caused by stress over problems that are now solved, and we get along fine. (As long as I'm being Pollyanna about it.)
Frankly, I suspect long standing issues that are outside of AI’s reach such as religion and racial tension might rise to the top. The outcome could easily be a swift and apocalyptic end.
I think I'd look to traditional science fiction for possible scenarios to aim for.
-jgp
The innovation of the Transformer Architecture literally disproves this.
What?
This strikes me as yet another step like that of the Copernican Revolution in which humans find ourselves getting knocked down a peg in 'specialness.' We lost our special place at the center of the solar system, but got over that by assuming we're still the highest form of intelligence on Earth and possibly in all of the Universe. God made us in his image, and just happened to place us in a perfectly ordinary distant arm of a totally ordinary galaxy that lacks any real distinction over any others that we can see. Now our place as the only highly intelligent and conscious being on our own world is threatened - at least the intelligent part - and the consciousness seems just a matter of time. No wonder people are freaked out.
Perhaps we need to hasten the acceptance of our ordinariness as just a particularly complex animal, no more distinct from the 'lesser' animals we share the Earth with than in being more complex and capable of building greater artifacts, and accept that we are fulfilling our destiny as the creators of the artifacts that will eventually transcend our complexity and power. We could feel pride in that if we so chose. Instead we seem to quake in fear at our next demotion.