This essay is a good take. I think one reason these AIs are so disconcerting is that the future evolution of AI poses both potentially unbounded downside risk *and* potentially unbounded upside. There's no consensus on which outcome is more likely, and there won't be consensus for a while, which is also unsettling and makes people feel uncomfortable. It's easier to put things--and people--in a box, so the immediate reaction may be to try to do that.
There aren't many risks that fall into this uncertain/unbounded bucket (most risks are clearly asymmetric in one direction or another--with either the downside or upside outcome having higher likelihood--and many are also bounded in magnitude on the upside or downside or both).
"Don’t fall for the easy argument" is good advice. Here is a less simple argument. The system has a limited amount of knowledge that can be accommodated. The number of questions it can generate answers to is infinite. Therefore, most of the possible answers must certainly turn out to be based not on the knowledge gained in the process of training.
Or maybe most of the answers turn out to be wrong.
As to history, we can learn what we need to know about the future of AI by studying the readily available history of nuclear weapons. AI, like nukes, is a historic, game-changing technology first developed with the best of intentions, which will evolve into some form of unacceptable threat.
We will obsess about the AI threat in the beginning, like we did in the 50s and 60s with nukes. And then, when no easy answers are found, we will settle into a pattern of ignoring and denial. We will comfort ourselves to sleep with the notion that "well, nothing too bad has happened so far" while the scale of the technology and the threat grows and grows, marching steadily towards some type of game-over event.
As to the label "Luddites", the irony here is so rich.
It is today's technologists who are clinging to the past, to a simplistic, outdated, and increasingly dangerous 19th-century "more is better" relationship with knowledge. They are so enamored of the science-hero stories of previous centuries that they don't even realize they are clinging to the past.
https://www.tannytalk.com/p/our-relationship-with-knowledge
It is today's technologists who are stubbornly refusing to make the shift from the knowledge scarcity environment of earlier centuries to the knowledge excess environment of today. It is today's technologists who refuse to learn the maturity skills which will be necessary for our survival as we go forward.
Today's AI technologists sincerely mean well, just as Robert Oppenheimer and those working with him on the Manhattan Project sincerely meant well. And like Oppenheimer and his team, they are ignorantly opening a Pandora's box that their successors will have no idea how to close.
The “stochastic parrot” analogy doesn’t look good to me, because it implies that probabilities are calculated (stochastic means probabilistic), but LLMs don’t use explicit probabilities.
I’d rather go with the “autocomplete on steroids” metaphor…
Not sure I understand, Ramon. Stochastic means that something can be modelled with a probability distribution over random outcomes, not "probabilistic" in general. Also, LLMs do use probabilities. The metaphor isn't wrong; it simply focuses too much on one aspect of the real object (all metaphors do, to some extent).
Got it. My confusion was that I was focusing on the weights inside the deep learning model (which are not probabilities), but once trained, LLMs indeed produce a probability distribution over tokens and use it to pick the best completion.
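For what it's worth, here is a minimal sketch of that distinction, assuming the Hugging Face transformers library and GPT-2 as a stand-in (the prompt is just illustrative): the learned weights are not probabilities, but a softmax over the model's output logits gives an explicit probability distribution over the next token.

```python
# The learned weights are not probabilities, but at inference time the model's
# output logits are turned into an explicit probability distribution over the
# next token, from which the completion is picked (greedily or by sampling).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The parrot on the pirate's shoulder said", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # raw scores for the next token
probs = torch.softmax(logits, dim=-1)        # explicit next-token probabilities

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={p.item():.3f}")
```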
Exactly!
I *am* a human and a stochastic parrot / prediction model, though. Granted, only a GPT-2 equivalent, maybe.
I gave ChatGPT a not-too-complex topic (a single topic I have some knowledge about) and told it we would take turns, each generating only one word at a time.
Before submitting each of my words - I was screen-and-mic recording the session - I said aloud the word I predicted ChatGPT would generate next, after mine. Then I used OpenAI's Whisper to generate subtitles of my predicted, spoken words and overlaid the transcription on the screen-recording video.
It turns out I achieved >70% accuracy - including, but not limited to, near-deterministic cases dictated by grammatical rules.
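For anyone curious how that number can be tallied, here is a rough sketch; the word pairs below are made-up placeholders, not the real session data (you would build them by aligning the Whisper transcript of the spoken predictions with ChatGPT's actual words).

```python
# Score my spoken predictions (transcribed by Whisper) against the words
# ChatGPT actually produced in the turn-taking game. Data here is illustrative.
import string

def normalize(word: str) -> str:
    """Lowercase and strip surrounding punctuation so 'Parrot,' matches 'parrot'."""
    return word.lower().strip().strip(string.punctuation)

def prediction_accuracy(pairs):
    """pairs: list of (my_predicted_word, chatgpt_actual_word), one per ChatGPT turn."""
    if not pairs:
        return 0.0
    hits = sum(normalize(p) == normalize(a) for p, a in pairs)
    return hits / len(pairs)

# Made-up example data, just to show the tally:
pairs = [("the", "The"), ("parrot", "parrot"), ("flew", "sat"), ("over", "over")]
print(f"accuracy: {prediction_accuracy(pairs):.0%}")  # -> 75%
```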
I may not be able to pass an inverse Turing test and trick a human into believing I am an AI, because I easily stray off-course into multi-modality as per my humanity / Natural General Intelligence, but I'd say I am quite satisfied with my performance as a stochastic parrot, drawing from mere implicit schemas of ChatGPT (because I am not an AI and can't remember absolutely everything ChatGPT ever responded to me in an explicit manner).
Well, human behavior is stochastic in some sense, but there's no way to make the "parrot" part make sense for a human unless the behavior is intended. We can always access the meaning behind the words and produce them with intention of some kind. As soon as we can do that the "parrot" metaphor breaks apart. Even if we could, in some instances, simulate the property of "parrotism"--and we do when we repeat sentences we don't understand--the fact that we can go beyond that makes us superior to parrots, which automatically destroys the original intention of the metaphor (i.e. LMs are, at best, stochastic parrots, whereas humans go beyond that).
That's what I meant with regard to why "I may not be able to pass an inverse Turing test"; my remark, albeit a "true story", was more about "The fourth existential insult to humanity" and why we maybe shouldn't be so pissed / insulted all the time. More of a criticism of the reactance of the general public (much more so than a criticism aimed at you - I actually find your articles to be quite balanced and backed by logical reasoning).
I am mainly bothered by the feeling that I "should not accept" (according to "society" [the mainstream media]) the fact that I am - to a small part that is not everything, I agree with you there - a stochastic parrot, and have vastly inferior memory to an LLM (though, on the other hand, I can apply that frail human memory in the sense of an NGI, a natural general intelligence) - but, NO! I should rebel against being inferior, I should not accept it!
...Which makes sense from a turbo-capitalism viewpoint: atomized societies, and YOU can make it if YOU want, but only if YOU (and nobody else at your side!) make it to the top and YOU are better than everybody else. That's the capitalist promise - YOU can make it, if YOU try.
So it makes a lot of sense to reject anything trying to be on par with you, or even surpass you. Greed and envy of others [human and machine alike], being ready to kill rather than let something surpass you - that's the "human condition", the conditioning we're all trained to abide by, no matter the cost.
Just six weeks ago, this podcast (independent media, DE) summarized it perfectly, much better than I could write it, with an abundance of cited sources and solid research backing it up: https://www.ndr.de/nachrichten/info/70-Marktfoermiger-Extremismus-Wie-tickt-die-Mitte-der-Gesellschaft,audio1300876.html
Now, linking to a German-language podcast is something I would've been hesitant to do a few years ago. But now I know you could just grab a Whisper model from git (or use it via the API) if you wanted, and all of that knowledge would be available to you. So there's the (rather grim) elaboration of what I tried, in a frail attempt, to summarize & criticize.
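If you want to try it, here is a minimal sketch, assuming the open-source whisper package from the GitHub repo (pip install openai-whisper) and a locally saved episode file; the filename and model size are placeholders.

```python
# Transcribe-and-translate a German podcast episode with a local Whisper model.
import whisper

model = whisper.load_model("medium")  # larger models handle German better
result = model.transcribe("podcast_episode.mp3", task="translate")  # German speech -> English text
print(result["text"])
```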
They basically elaborate on how it is neither the extreme left nor the extreme right that is the true danger to a democracy; the very "center" itself is the threat. The "I am not racist, but..." is, and the "I am not against new technology developments, but..." is.
This podcast is not about tech, but a rather general view of how the economy, the market, capitalism itself, gives rise to "market-shaped extremism", in which the worth of a human is decided by their net worth and nothing else. Once you've taken in the whole podcast, it's very easy to recognize how it links to seemingly "disconnected" issues such as AI innovation, pandemic lockdowns, the worth of elderly generations, and so on. Reject anything that might be as good as you, kick it in the face, kick it down, and be superior: the ME and THE I, that's all that matters - an inbuilt way to dismiss anything that may threaten my self-worth, which is solely determined by my net worth.
And, unfortunately (or fortunately, in the sense of "this will make sense to you, absolutely"), it's exactly "same difference" in the US vs. Germany. The sole difference is that in Germany there is no Second Amendment - but everything else directly applies.
/verbosity off. :-)
PS: Unfortunately, it's very easy to hit the hourly request limit at the API by engaging in this kind of turn-taking. Use with caution / when you don't need anything else from the AI in the next hour.
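If you do automate the turn-taking, wrapping each request in a naive retry-with-backoff softens the problem a bit; here is a sketch assuming the current openai Python client, with the model name and system prompt as placeholders rather than the exact setup from my session.

```python
# One-word-per-turn game via the API, with naive backoff when the rate limit hits.
# Assumes the openai>=1.0 Python client; model and prompt are placeholders.
import time
from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def next_word(history):
    """Ask the model for exactly one more word, retrying on rate-limit errors."""
    for attempt in range(5):
        try:
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "system",
                           "content": "We write a text one word per turn. Reply with exactly one word."}]
                         + history,
                max_tokens=3,
            )
            return resp.choices[0].message.content.strip()
        except RateLimitError:
            time.sleep(2 ** attempt)  # back off 1, 2, 4, 8, 16 seconds
    raise RuntimeError("still rate-limited after several retries")
```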