The AI Empathy Crisis
Remember LaMDA and the Google engineer? That, but happening to millions of people
AI language models (LMs) have recently gotten so skilled as to be believable—even deceptive.
Not in the sense of intentionally fooling people, but in the sense of being capable of generating utterances that would make us imagine a mind behind the screen.
We—gullible humans with a tendency to anthropomorphize non-living objects—are the perfect victims of this trap.
As access to LMs becomes widespread, many people will start to doubt what, or whom, they're talking to. Some will even claim certainty: “AI is alive and sentient.”
This powerful illusion, at scale, will be the beginning of the first AI empathy crisis.
From an isolated case to a full-blown crisis
The AI empathy crisis refers to a future stage of AI development in which we won’t yet have built sentient AIs (this isn't to imply we eventually will; I don't know), but we’ll have built AI so advanced that an increasingly large number of people will believe it’s sentient or conscious.
If we assume, first, that it's easier to create the appearance of sentience than true sentience and, second, that the AI community is steering us toward the former and not necessarily the latter, we can conclude that, sooner or later, this will inevitably happen.
The term “robotic empathy crisis” was introduced in 2017 by scientist and science-fiction author David Brin. I've modified it slightly given the recent explosion of virtual AIs, but the idea remains the same:
“The first robotic empathy crisis is going to happen very soon … Within three to five years we will have entities either in the physical world or online who demand human empathy, who claim to be fully intelligent and claim to be enslaved beings, enslaved artificial intelligences, and who sob and demand their rights.”
As I noted above, I see two complementary causes for this to happen: first, the best LMs are good enough to create a temporary illusion of sentience. Second, people are prone to believe what they perceive without further critical inquiry; in particular, we tend to assume AI is human-like (this is known as the ELIZA effect).
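To see how little machinery it takes to trigger that effect, here’s a minimal sketch of an ELIZA-style bot in Python: a handful of regex rules plus pronoun reflection, with nothing resembling understanding behind them. The rules and replies below are invented for illustration; they’re not Weizenbaum’s original script.

```python
# A minimal ELIZA-style sketch: a few regex rules and pronoun reflection,
# with no model of meaning behind them. Patterns and replies are
# illustrative only, not Weizenbaum's original script.
import random
import re

# Map first-person words to second-person so the bot can mirror the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "i'm": "you're", "was": "were", "you": "I", "your": "my",
}

# Each rule: (regex, reply templates). "{0}" is filled with the captured,
# pronoun-reflected fragment of the user's sentence.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "Could there be more to it than {0}?"]),
    (r".*\bsentient\b.*", ["What would it mean for me to be sentient?"]),
    (r"(.*)", ["Please, tell me more.", "I see. Go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echoed fragment reads as a reply."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the reply from the first rule whose pattern matches."""
    text = user_input.lower().strip()
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            return random.choice(replies).format(fragment)
    return "Please, tell me more."  # unreachable: the last rule matches anything

if __name__ == "__main__":
    print(respond("I feel like nobody listens to me"))
    # e.g. "Why do you feel like nobody listens to you?"
```

Something this shallow led some 1960s users to attribute understanding to ELIZA; today’s LMs are incomparably better at the form of language, which is exactly why the illusion scales.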
Together, both conditions create a suitable setting. Indeed, as we all know, this has already happened; Blake Lemoine and LaMDA’s story made the news for two whole weeks in June 2022.
That was an isolated case—largely rejected by the scientific community and public opinion alike—yet it happened.
We dismissed it as nonsense, but for Lemoine, LaMDA was “a person” chained to Google’s servers, deserving of the same rights as any of us. He truly believed it—even if in his capacity as a priest and not a scientist.
Now, let me ask you this: how much time do you think we have until it’s not one person but thousands, even millions, who start to doubt?
Universal, cheap, and easy access to future LaMDAs, ones that will have mastered the ability to sustain the mirage of humanness for longer, will create an unstoppable snowball effect.
We’ll have a full-blown AI empathy crisis.
People will form bonds with these systems, virtual or physical. Some will feel empathy over how the systems are treated and will claim they deserve rights, as Brin predicted. Others will turn to them for friendship. Still others will develop amorous attachments. There are already instances of all of this happening.
In 2017—when the Transformer wasn’t even a thing—Brin foresaw this crisis would begin in “three to five years.” We’re not there yet, but he may still nail it—we have more than a few reasons to believe we’re very close.
The breeding ground is ready
Generative AI is at an all-time high in terms of investor funding, user interest, research possibilities, ambitious companies, and useful products. We’re witnessing the birth of a whole new industry that will move billions of dollars.
GPT-3-based companies number in the dozens. The number of image-centered generative applications is growing so fast that even insiders have a hard time keeping up. Powerful companies like Google, Meta, and OpenAI won't slow down, and others will jump on the bandwagon. And the tech is getting cheaper, faster, and better, which means it’ll scale easily.
This is the perfect breeding ground for such an AI empathy crisis. I’m confident generative AI will spark the flame.
And, within this space, LMs are the most likely culprits.
On the one hand, language is tightly entwined with theory of mind in humans: we ascribe humanness to something that writes well, but not necessarily to something that paints well—LaMDA could feel like a person, but not DALL·E.
On the other hand, LMs possess a set of user-facing features, each of which reinforces both the ELIZA effect and the growth of this impressive wave of generative AI.
First, high quality. Current LMs are masters of the form of language (syntax and semantics). They’re versatile in terms of topics, style, and tone, and are improving their ability to keep a coherent facade over time.
Second, accessibility. Lemoine’s infatuation with LaMDA was only possible because of his privileged position as a Google engineer. That won’t be a requirement anymore. Soon, these tools will be integrated into smartphones and tablets, within reach of billions of people.
And third, ease of use. The ultimate goal of companies and devs is to create no-code tools. If anyone with a modern device and an internet connection can get the most out of this tech, the reach will be universal: a revolution on the level of the iPhone in the 2000s and social media in the 2010s.
High-quality tools that are accessible and easy to use are the key ingredient that was missing before and is now in place to trigger this global empathic phenomenon.
Until now, hype has been the main force pushing people to believe AI was more advanced than it really was (mainly through companies’ PR moves and news outlets’ exaggerations).
But the widespread availability of generative AI tools will let people construct that reality for themselves. Nothing is stronger, or harder to change, than that. We won’t have to believe (or disbelieve) anyone. We’ll have first-hand experience with these AIs, and the beliefs we form will be nearly indestructible.
Debunking LaMDA’s viral sentience story was a matter of public discourse and a coherent narrative of dismissal. Because only Lemoine had access to it, we were all dependent on fragile second-hand testimony. Lemoine's truth was our truth, but second-hand counterpoints, or just a bit of individual critical thinking, were enough to wreck his “evidence”.
But how could we reject people’s first-hand experience? How could experts educate people about AIs’ inner workings, and their non-aliveness, when anyone could simply prompt their way into a different illusory certainty?
A new kind of dystopia
The immediate consequence Brin anticipated was that people would take to the streets to demand rights for robots (or AIs, for that matter). People would be defending something nonexistent and would stop looking at the real problem: the humans behind the machine.
In 2020, Abeba Birhane and Jelle van Dijk published a paper entitled “Robot Rights? Let's Talk about Human Welfare Instead.” They argued that we should not just “deny robots 'rights', but … deny that robots, as artifacts emerging out of and mediating human beings, are the kinds of things that could be granted rights in the first place.”
The key here is “emerging out of and mediating human beings [emphasis mine].” Companies and developers responsible for these AI systems could hide behind their apparent sentience, further neglecting their moral imperative to be accountable for the harm their creations cause.
As Timnit Gebru told Wired’s Khari Johnson:
“I don’t want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans.”
Those least versed in these emerging technologies won't just be the most vulnerable to believing an imagined reality; they'll also be the most likely to suffer second-order harms: a company shutting down and erasing a supposed best friend or lover, a deep mistrust toward anyone who doesn't share their view, or even a fear of the future.
It could be the final step in the deconstruction of an already shattered common reality.
Brin ended his talk with a question: “Can we maintain a civilization?” And he answered: “I don’t know the answer to that … but I think it’s … possible.”
I’ll leave it at that.