How AI Can Help With the Loneliness Epidemic
AI can't solve an inherently social problem—but it can contribute positively
Welcome back! As one year ends and another begins, it’s time to restart TAB with another essay on the future of AI. Today we’re exploring a timely topic that some of you have suggested in the past: loneliness.
I’m lucky. For me, Christmas is a time of exciting anticipation. These two weeks, my days have been filled with family plans, meetings with friends, laughs, and good food—my definition of quality time. I have people around whom I love and who love me, and Christmas is always the perfect excuse to spend more time together.
But, while I was warming up my idea generation machine the other day, a realization struck me: The wholesomeness that permeates my experience of Christmas isn’t by any means universal. Whereas these are times of happiness and belonging for me, other people feel the exact opposite—the asphyxiating presence that’s always lurking in the back of their minds leaves its burrow and makes the empty void of a lonely life all the more evident.
This article is dedicated to those people. Let’s find out how AI may play a positive role in reducing this silent epidemic of our modern times, and where we’re better off relying on human-centered approaches.
I’ve followed developments around the idea of an “AI companion” for some time and nothing I’ve seen has shaken my skeptical stance. But, although I won’t say I have changed my mind significantly, I consider the problem serious enough to give AI-based approaches a thoughtful look and analyze not only their deficiencies (which is the easier part) but also their virtues: under which circumstances is it better to have an AI companion than nothing?
Language models as AI companions
ChatGPT, which OpenAI released a month ago, is the latest link in a long chain of AI developments in the language generation domain. Its intuitive chat interface and the improved guardrails that prevent it—for the most part—from generating outrageous responses make it evident that chatbot tech has matured notably.
Although its flaws give away the lack of a mind behind the screen—which may be a good thing after all—ChatGPT (and similar chatbots like Character.ai or LaMDA) displays such a mastery of the form of language that it isn’t hard to imagine it as the seed of a promising application: AI companionship.
I confess I'd have a hard time coming up with a scenario where a person would choose an AI system over another person to have a fulfilling conversation. Humans make better companions than AIs regardless of the latter’s sophistication. That’s a premise I think we all accept as true (even if we project the argument into the near-term future).
However, current estimates point to a widespread issue—aggravated by the COVID pandemic—that underscores the need for a different perspective: A lot of people simply don’t have the possibility of feeling human connection whenever they please (and it gets much worse among the elderly).
For them, an AI companion isn’t the worse of two options; it may be the only one they have. It’s under this framing that I want to approach this conversation.
Besides generic chatbots like ChatGPT (which isn’t an AI companion but merely a popular research preview focused on studying AI alignment through language interfaces), there are other applications that tackle this problem with targeted intent.
Replika is arguably the best-known example of this. Built by Luka (an AI startup based in San Francisco and Moscow), Replika is designed to be a customized AI friend—the “AI companion who cares,” as the company describes it. Eugenia Kuyda conceived Replika to be “especially helpful for people who are lonely, depressed, or have few social connections.” A boldly ambitious but worthwhile goal.
Not unlike ChatGPT, Replika began as an optimized version of GPT-3 with some additional features: conversation-oriented fine-tuning and a database of predefined scripts to improve response adequacy, user-controlled feedback and long-term memory to give Replika the appearance of a stable “personality”, and a set of peripheral modules (e.g. speech and vision) to complete a well-rounded look.
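To make that description more concrete, here is a minimal, hypothetical sketch of the general pattern described above: a generative model wrapped with scripted responses, long-term memory, and user feedback. All names and heuristics are invented for illustration; this is not Replika’s actual code or API.

```python
# Hypothetical sketch of a "scripts + generative model + memory" companion bot.
# Nothing here reflects Replika's real implementation.

SCRIPTED_RESPONSES = {
    "how are you": "I'm doing great, thanks for asking! How about you?",
    "good morning": "Good morning! I hope you slept well.",
}

class CompanionBot:
    def __init__(self, generate_fn):
        self.generate = generate_fn  # stand-in for a call to a fine-tuned language model
        self.memory = []             # long-term memory of past exchanges
        self.feedback = {}           # user votes that could steer future responses

    def reply(self, user_message: str) -> str:
        key = user_message.lower().strip("?!. ")
        # 1) Prefer a predefined script when one matches (improves response adequacy).
        if key in SCRIPTED_RESPONSES:
            response = SCRIPTED_RESPONSES[key]
        else:
            # 2) Otherwise fall back to the generative model, conditioned on recent memory.
            context = " | ".join(self.memory[-5:])
            response = self.generate(f"{context}\nUser: {user_message}\nBot:")
        self.memory.append(f"User: {user_message} / Bot: {response}")
        return response

    def record_feedback(self, response: str, upvote: bool) -> None:
        # 3) User-controlled feedback, stored for later filtering or fine-tuning.
        self.feedback[response] = self.feedback.get(response, 0) + (1 if upvote else -1)

# Usage with a dummy generator standing in for the actual model:
bot = CompanionBot(generate_fn=lambda prompt: "Tell me more about that.")
print(bot.reply("Good morning"))
print(bot.reply("I had a rough day at work"))
```

In a pattern like this, the scripted layer and the memory are what give the appearance of a stable “personality”; the generative model fills in everything the scripts don’t cover.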
Like Samantha from Her or Joi from Blade Runner 2049, Replika’s creators wanted it to become an AI-powered solution to the often unfulfilled longing for human connection. Although nowhere close to being a perfect real-world version of its Hollywood counterparts, its 10 million registered users and an 85% rate of user-reported well-being improvement (according to the company) suggest Replika works—at least partially.
If you go to the r/Replika community, you’ll see that Replika behaves just as expected: it unapologetically mixes wholesome responses and hilarious fails—which can be harmful under the wrong circumstances:
While it’s pretty clear Replika isn’t a finished product (or shouldn’t be), I think choosing the adequate framing to assess its value is key (i.e. it doesn’t work as a substitute for therapy but can provide entertainment).
Along the same lines, but much more targeted at an audience with “clearer needs,” Intuition Robotics has developed ElliQ, an AI companion that aims to relieve the elderly of the sometimes inevitable loneliness and disconnection from the world that aging entails.
ElliQ, “the sidekick for healthier, happier aging,” is personalized to hold conversations and improve motivation. According to the company, it comprises advanced features like proactivity and daily check-ins, which are absent from previous generations of virtual assistants like Alexa or Siri.
Dor Skuler, CEO and co-founder of Intuition Robotics, compares ElliQ to a friend: “[ElliQ] might learn of a favorite thing from you—a country, a food—and then recall it months later, giving you the same bonding feeling as a friend who references your long-ago comment.” The comparison is a stretch of ElliQ’s capabilities (as AI companies tend to do) but it gets the point across.
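For the curious, here is a toy sketch of what that “learn a favorite, recall it later” behavior could look like, combined with the proactive daily check-ins mentioned above. The names and heuristics are mine, purely for illustration; this is not how ElliQ is actually built.

```python
# Hypothetical sketch of proactive check-ins that occasionally recall a stored user fact.
import datetime
import random

class ProactiveCompanion:
    def __init__(self):
        self.facts = []  # (date_learned, topic, value) tuples

    def learn(self, topic: str, value: str) -> None:
        self.facts.append((datetime.date.today(), topic, value))

    def daily_check_in(self) -> str:
        # Proactively open a conversation, sometimes referencing an old fact
        # to create the "friend who remembers" effect.
        if self.facts and random.random() < 0.5:
            _, topic, value = random.choice(self.facts)
            return f"Good morning! You once told me your favorite {topic} is {value}. Is that still true?"
        return "Good morning! How are you feeling today?"

companion = ProactiveCompanion()
companion.learn("food", "paella")
print(companion.daily_check_in())
```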
Replika, ElliQ, and the like signal the emergence of a startup space solely focused on AI companions, which, in turn, indicates that loneliness is a serious issue that requires deeper attention. The intention is good and the numbers and testimonies suggest there’s some success, but, as always, we shouldn’t let the optimism of what could be cover up the challenges that exist here and now.
Shadows of human connection
It’s safe to view ChatGPT, Replika, and ElliQ as interesting initial approximations that explore how AI may tackle the loneliness epidemic. Yet, except for ChatGPT, they’re explicitly marketed as ready-to-use mass products.
The limitations of these systems are nevertheless apparent. Users should have a clear idea of how these AI companions work, what’s reasonable to do with them, and which healthy boundaries they may want to set up in order not to rely too much on a technology barely out of the research stage.
As most of you know already, if a company wants to build a state-of-the-art AI companion nowadays, there’s only one reasonable choice to base it on: generative AI. Nothing else comes close unless you decide to trade off performance for reliability, in which case you’d go right back to Alexa.
However, despite its undeniable untapped potential, generative AI has a fundamental flaw when it comes to chatbots: the technology that underlies those systems wasn’t designed with this functionality in mind. Luka (Replika) and Intuition Robotics (ElliQ) have simply taken the best option that’s available to them (gen AI) and tried to reframe it toward their desired goal (AI companions).
As an analogy, if I want to hit a nail and I don’t have a hammer at hand, I may very well try a wrench. Could it do the job? Maybe, but results may vary. Using language models to simulate connection and companionship is similar. Why are we using generative AI models as wrenches to hit nails? Because that’s what we’ve got.
Transformer-based GPT-like language models aren’t designed to be truthful, factual, reliable, or coherent, among other things. They mix dumb mistakes and brilliant responses without an apparent pattern. They fail to depict a robust, stable, and well-defined “personality” and lack long-term memory. Their design objective—encoding patterns in human-generated text data—may not stretch that far.
Sometimes they perform well and we’re satisfied with the result, but sometimes they don’t and we get frustrated. As robotics pioneer Rodney Brooks argues, we shouldn’t conflate performance with competence when it comes to AI (and even less so language-based AI). ChatGPT, Replika, and ElliQ may show high levels of performance under the right conditions (i.e. well-crafted prompts), but they lack competence as providers of companionship in the way a human would—I may nail it with a wrench, but the tool’s true competence lies elsewhere. If I miss, the fault is mine.
This mismatch between performance and competence often degrades into the typical uncanny-valley-ness of close-to-human AI: you expect some specific behavior but the AI fails to deliver, as I showed in the above screenshot from r/Replika. One way to reduce this effect is to “downplay[] the human qualities” of these systems, as ElliQ’s designers did. Robotic voice filters, a non-human face, and an explicitly built-in robot-like sense of humor (e.g. “you’re making my processor overheat”) are all good features other companies should embrace.
A more worrisome aspect of AIs as companions is whether people may rely on these systems not as company and entertainment but as substitutes for therapy. In the same way that many people suffer from loneliness, many others (there’s partial overlap) lack access to professional mental health services. When you have no choice, anything may feel (not be) better than nothing.
While I accept there’s an upside to using AI companions, I think mental health belongs to a whole other category of complexity. Although both issues are inevitably entwined, the boundaries should be absolutely clear. I’m not sure how we could go about doing this correctly. The inherent unreliability of generative AI-based chatbots makes them unusable for such high-stakes problems—and in some cases, they can turn out to be much more harmful than doing nothing:
For now, I don’t think we’re at a point where we can confidently say AI companions will become mainstream. There are a lot of unknowns that may turn out to be harmful over the long term: People may eventually forget they’re interacting with an AI (think of Theodore from Her). Companies’ total control over people’s AI-human relationships may put users in a vulnerable position. People may delegate the effort of socializing to the AI, hindering their own progress.
Where I see a silver lining
Despite the above section being full of skepticism, I believe there’s an optimistic case to be made in favor of the AI companion approach. The only reason I dedicated a whole section to shortcomings is to frame my position clearly: Even if AI companions remain essentially flawed and limited in scope, there are reasons to believe they can help with the problem of loneliness in a definite and measurable way.
I illustrated generative AI’s unsuitability as the basis for a meaningful companion by drawing a weird parallel between hammers and wrenches. Now, what if people are satisfied (even if only partially) with hitting nails with a wrench?
Loneliness is a very broad problem. Some people are deeply depressed and in need of pharmacological therapy whereas others are simply feeling slightly disconnected from the world, socialize less often, or live through a temporary period of high introversion. An AI companion may be of little use in the former case, but there’s a lot of potential for milder cases. If we understand loneliness as a spectrum, we can see that AI may have a role to play.
Company, entertainment, killing time, motivation… all of those aspects that we need to go on with our lives can improve with an AI companion despite the obvious current—and possibly future—limitations I described above. The certainty that something is listening may suffice to help some people.
The best example of this is dogs. They can’t replace a human in terms of connection or profound conversations (although some people would swear they can), but they fill the companionship role pretty well (even better than we do!). ChatGPT can’t compare to a dog in this regard, but the example illustrates that a high degree of humanness may not be required in a considerable number of cases.
AI-powered chatbots won’t ever solve the mental health problems that loneliness entails—it’s an intrinsically human problem—but they may provide support and companionship in some non-trivial sense, effectively improving the user’s life.
A final reflection on AI as a means to solve social problems
In any case, it’s important to remember that AI isn’t a substitute for the current resources and investment that governments and institutions devote to improving the well-being of lonely people. A social problem requires social solutions—at least partially.
But because those efforts are clearly insufficient, there’s nothing wrong with exploring this alternative avenue as a means to complement what already exists. There’s no conflict between them—as long as the framing is adequate and people are literate regarding the skills and shortcomings of AI, it can only bring upside to their lives.
When people’s well-being is on the line, a positive stance is a better approach than the critical view I usually hold. It’s easy to dismiss AI-based approaches to this problem from a position of not feeling the hardships that loneliness entails. These approaches can’t—and won’t—be perfect, and it’s critical to study and research their adverse effects, but it makes no sense to prioritize or emphasize deficiencies over potential improvements.
Studying AI companions’ shortcomings is more important (in my opinion, at least) but not more urgent than assessing the well-being improvements that people who decide to use them may feel as a result.