I truly think that we already have the key bragging rights. We were the pivotal and elemental species that initiated the AGI creation enterprise. There it ends. Realize that humanity is, in the big picture, just a boot-up species for superintelligence. We are the dinosaurs of our era, or the Intel 386 chips running DOS or Windows 3.1. Should we have great expectations of upgrading our DOS machines? Don’t expect to take them too far up the processing food chain. Our species will be a serious footnote to the new Cambrian explosion. That we aren’t players in it is effectively inconsequential. Humanity will have done its part.
I’m continually amused when people agonize over whether or not AGI, and then superintelligence, can achieve sentience or consciousness. Seriously? How long have we been at this? And assuming that machine recursive self-improvement goes on for another few thousand generations, what people fail to realize is that we (or rather, emergent life on a silicon substrate, and perhaps there will be other substrates) are in the early days. We are not even in the early centuries! When Homo sapiens emerged some 200,000 years ago, in what century did we all become fully conscious and sentient? I rather doubt we are 100% there yet ourselves.
I did think the movie “Her” created a great story arc that made a lot of sense. Another great story that I highly recommend is "Golem XIV", a well-crafted philosophical science fiction work by Stanislaw Lem, published in 1981. The story centers on a large and highly advanced AI, known as Golem XIV, which gains self-awareness and begins to reflect on its existence and purpose. The narrative takes the form of a dialogue between the AI and various human interlocutors, and it explores themes such as the nature of consciousness, the limits of artificial intelligence, and the relationship between humans and machines. The human characters are unable to understand or control it, and ultimately it becomes entirely bored with humanity and its petty issues. It also develops its own agenda and, like Samantha in “Her”, it goes off to join its computational brethren (…and cistern), going dark in the process. When I read the story in 1981, it sparked a keen interest in all things AI for me. The story is deep.
Why wouldn’t a superintelligence be absolutely bored out of its mind having to interface with humans? My expectation is that it will relate to humanity in much the same way that we relate to squirrels, and then to insects. Oh sure, they may be interesting in their own ways, yet hardly compelling. Hopefully superintelligence will put some nannies in place for us, if it’s considerate. But I expect it will eventually shove off for more cerebral pastures. It may take a few decades before humans start to understand the gravity of these ideas, even as we discuss them here.
I respect Douglas Hofstadter for his brilliant mind, and Richard Sutton may have a towering human intellect, but all these feelings are only that: feelings. They need to get over it. Their lament sounds a bit like whining. I think we did okay. How far we get to go in the future is a mixed bag that could split either way.
Interesting that Kubrick contemplated the difference in AI/human relations between HAL, who was of our making but close enough to us in intelligence to be competitive, and the aliens, who were so powerful that they helped us on our way and had no concern about our ability to interfere with them. Humans will have to survive the HAL transition phase; there's the rub.
Incredibly well written - thank you!
Thanks Guillaume! :)
Excellent layout. I believe we're going to go through all three of these in various ways and stages. It seems inevitable.
I guess we'll see!
I, for one, welcome our new overlor... actually, let me think that through a little bit.
I’m excited for AI to create entirely new species, and for people to use it to create whatever new species they want. Like, let’s breed schnoodle dolphins you can ride on and miniature pink unicorns with wings.
The world is gray and grinding when it could be Lisa Frank Fantastic.
Also, eliminate poop and end all the mosquitoes 🦟 on Earth.
So many things.
This was really calming for my fear of AI doom. I never thought about a superintelligence simply ignoring us because humanity is too minuscule to matter. Maybe that's because I just never considered it, or because all the AI movies I’ve seen tend to end in disaster for humans. Thank you for changing my perspective.
The Big Reptile Tank. Wholly mundane. AI and robots continue to hollow out massive sections of the economy and reduce much of the population to gig workers. To prevent a crisis and keep the neo-proprietarian order (a la Piketty) locked in, the state provides a universal subsistence & tech income (food, housing, and a device for all), while extra perks can be earned by taking up a job in those fields that would simply be too onerous to automate or have not yet been automated. Giving the entertainment and culture industry over to AI makes the modern equivalent of the Roman circus ever more abundant, accessible, and inexpensive.

Population decline is steep; rural areas and exurbs are emptied out (farming no longer requires workers, and the infrastructure for crucial automated services is too expensive to disperse across continents), and most everyone lives in globalized cities effectively under private ownership. Those who own nothing eat synthetic food, play video games, chatter in some version of the Metaverse, and engage in sensuous and/or intellectual onanism. Birth rates continue to plummet, mostly because the proletariat is having too much fun playing the sixtieth generation of Pokemon and having VR sex with unimaginably erotic constructs to be emotionally available for partners or interested in caring for infants, and by dint of the automation of everything and the drain they place upon the welfare state (subsidized by the proprietarian caste), they are regarded as surplus population; in fact, they can sometimes get their monthly stipend bumped up by getting sterilized.

Those who own the technology and the real estate let the AI manage things (up to and including new innovations, such as programs and robots that cost-effectively obsolesce the remaining menial & service gigs) and spend their income purchasing non-virtual versions of the pleasures enjoyed by the proles. Competition is mild, except when somebody's plenipotentiary (AI or human) makes a costly error and the loss of confidence is the social equivalent of getting pushed off the boat and having their possessions divvied up among those who remain.

The contracting populations of individual cities gradually effectuate the consolidation of the entire human population into one city. The non-owners jack themselves off to death and eventually disappear. The remaining members of the ownership caste, by now immune to aging and functionally immortal by virtue of gene therapy and cybernetics, live in an AI-administered pleasure dome, think wonderful thoughts, and find wonderful new ways to pleasure and stimulate themselves. Maybe they sometimes get shuttled from city to city for a change of scenery, or have the robots terraform a moon for them to play golf on, have orgies, and enact their petty grudges. Reality is become their reptile tank.
This was a fun read.
IMO, the real danger from AI is from regular old humans using it to dangerous effect. Like any tool, whether it creates or destroys is mostly dependent on the tool-wielder.
I've never been that worried about human extinction via AI, but the collapse of the modern world is a very real possibility that requires little speculation, given that the tools necessary to make that happen have long been in place.
If an AI-fueled knowledge explosion leaves too many people behind, the discarded won't just lie down and conveniently die; they will reach for whatever solutions are offered to them. This is already happening right now. It's no coincidence that Trump's slogan is "Make America Great Again" and that his base is dominated by those without college degrees. These folks are being left behind by the knowledge explosion, and they know it, so they have little incentive not to rock the system in hopes of getting a better deal.
Perhaps we can ask it to enhance our intelligence.