You Can't Tech Away Loneliness
They will try to fix the unfixable and manage to break the unbreakable
Social problems require social solutions.
That's the most conventional take—bordering on platitude—against attempts to solve social problems with technology. It's true, though not because innovation is inherently at odds with the social-ness of things. The reason is—as it always is—Moloch. We'll get to that later.
Technologists insist anyway—sometimes dressed up with good intentions, sometimes dressed up in green—but can't move the needle. In a sense, the repeated failure of tech-bro approaches to figure out society as if it were a business plan compels me to agree with the standard take, so let's use it as a starting point: Social problems require social solutions.
It invites a few questions.
Why do would-be tech founders keep trying to mitigate these collective issues (that, ironically, often emerged out of other tech)?
Why does society at large remain unable to coordinate so that we don't need—or encourage—those emphatically proposed but inevitably doomed entrepreneurial forays?
Are the lives of the people affected better off with tech non-solutions or with unsolved pains?
Why do institutions and policymakers sit there, in deafening silence or under burying bureaucracy, instead of proactively collaborating with those to whom they're entrusting the future?
Sorry, I’m being intentionally snarky. But once I enumerate the many impotent potential solutions just in the generative AI space, the snarkiness will suddenly appear justified:
The Friend wearable (and similar devices like Rabbit R1 and the Humane pin).
AI friend/girlfriend startups like Replika and Character (the latter just got sued over the suicide of a 14-year-old kid).
An entire suite of uncanny-valley robots that simulate emotion (and some without the eyes).
The anthropomorphized GPT-4o voice mode (also in Google NotebookLM).
Looksmaxxing apps like Umax, which supposedly help you get hotter.
Adjacent tech-social spaces have tried as well, with some notable successes like Facebook or Tinder. Or Tamagotchi.
My unifying answer to those questions is simple: innovation is well aligned with the short-term financial motivations of tech founders—which pushes them to act—whereas pro-social policy isn't nearly as well aligned with politicians’ short-term partisan interests—which invites them to wait. But then, as we'll see, innovation is only conditionally aligned with pro-sociality. This creates a dangerous imbalance. Let's see how it unfolds.
An AI startup's goal isn't solving loneliness. That just happens to be a good by-product of making a lot of money with AI. They want to be rich and, in the meantime, why not try to put a patch on some social matter to earn a few reputation points?
Of course, it never works. Because the apparent alignment between pro-sociality and pro-fits is broken in the details (as tends to happen when capitalistic practices meet the human side of humanity; they often make life faster and easier rather than better). A technology that really wanted to solve a social concern would need to become so attuned to the intricacies of the human psyche and of interpersonal relationships that it'd stop being a technology at all.
The Friend AI wearable could help ease the increasing lack of friendship if it were, well, a real-person type of friend instead of a necklace. But that's not a technology—it’s a person. And you can't (legally) make money selling those. Tinder could solve the dating market if it stopped being a swiping app and turned into a physical place to meet new people. But that's not a technology—it’s a bar. You can't have that on your phone. Character and Replika could better tackle the loneliness epidemic if they weren't addictive roleplaying chatbots but your two dogs—“Hey Chary, hey Plika, who's a good boy!” But dogs aren't technology—and they don’t reproduce according to the scaling laws.
Let’s not forget Facebook—the original tech-solution-for-social-problems project of the digital era. And its original sin.
Facebook was marketed as a new way to help with an old struggle: keeping up with friends and relatives. (Zuck actually wanted to get girls at uni, but it didn't work out.) To solve that, as it couldn't be otherwise, he applied an antisocial approach: I'll consume all of your “time and conscious attention” so I make shitloads of money; hopefully, you can keep in touch with your friends—who, hey, good news, happen to be as hooked as you are.
Immersed in such a brave quest, Facebook earned the “social network” title. Right after, Zuck stopped pursuing the “social” aspect of the business and focused on turning it into an impersonal super-network. He championed the “algorithmic feed” (a.k.a. the AI-powered feed) and the “like” button (Earth's heart broke that terrible day). The rebranded platform allowed you to find interests beyond your immediate circles. Except you should replace “allowed” with “enticed” and “find interests” with “stay engaged”.
As soon as Zuck found a bigger gold mine, he swiftly dropped the pro-social part of his company, which had become an obstacle to his true goal. The social value of Facebook, always brittle but rather noticeable in the beginning, was only pursued insofar as it was instrumental in growing the business and making money.
So the only way to truly help improve social problems with technology is, first, paradoxically, to set aside any intention to build technology.
Because what people want is other people, not a talking collar. Tech founders eventually find out that creating a startup that pretends to be pro-social is much easier than actually enacting a pro-social policy.
And second, they have to set aside any intention to make money. Exactly. Teching away social problems won't make you money. It costs money. If Facebook weren't selling you ads—which is not particularly pro-social—it'd be a money drain.
Okay, so who volunteers to innovate a non-tech, non-profit solution to this? Zuck isn't having it.
That's why the government—being dependent (or so I want to believe) on people's taxes and voting power (ha!), and free (or so I want to believe) from crazy inventors and vested financial interests (yikes)—is uniquely incentivized to attack this problem category. It's the institutional, nationwide, pro-social policies—not competitive tech innovation—that can do something here. I don't know, build parks and walkable streets. Fight obesity and drug dependence. Or even reward the business types that manage to successfully juggle shareholder greed with pro-social efforts.
But, as Moloch would have it (I promise you we'll meet him), the true goal of a political party is winning the elections.
The government, try as it might, is not so different from the increasingly unpopular entrepreneurs. So we're back at the start: politicians' short-term selfish motivations are aligned with solving the social problem, but only insofar as it earns them partisan points to win a popularity contest every four years.
Tech founders—who can't be as pro-social as they'd like because they need profits—are individually incentivized to act, feigning pro-sociality because they may win a PR prize in the process of making a big pile of cash. Politicians—who could be as pro-social as they'd like because they “don't need” profits—are individually incentivized to wait to be pro-social. Until it's election time.
Before the election arrives, it's not worth it. As soon as the election passes, it's not worth it. It's only in the months-long window right before ballots are cast that they suddenly get the urge to think about The People.
When the results come out, incentives change, lies are revealed, and we get angry. But we forget the grievances somewhere in the four years before we vote again because, in the meantime, we're deeply addicted to the technological replacements for the things we really wanted.
So politicians get their elections. Founders get their money. And we get a useless AI wearable or some alluring chatbot to get hooked on, which we're told replaces the pro-social policies we'd get if Moloch—bastard—didn't get in the way.
Moloch, oh, Moloch. Who are you, the one responsible for so much misfortune?
Well, it is you. And me. It's us. Us, collectively suffering these social pains yet individually obsessing over ourselves and our short-term desires. It's those politicians and founders, caring only about their own interests.
Moloch is the invisible demon that prevents us from coordinating to make the world into the better place we all want but aren't willing to sacrifice for in the intimacy of our personal privileges.1
In short: it is the eternal recurrence of humanity's race to the bottom. We get up and find a way, every time, to go down.
Then the cycle repeats.
The same forces that bring about the worst imaginable technologized version of society prevent us from receiving the better version we want of what we already have. Once you realize it is incentives all the way down, you also realize we're all merely eluding our duties by escaping forward.
I’m left wondering if this will ever be solvable, or if I’d be better off buying an AI necklace, smashing it to pieces, and hanging it on the wall to stare at in the stillness of night—reminding myself that, come morning, I should take a walk.
This is a terrible simplification that doesn't do any justice to the original concept; go read Scott Alexander’s essay.
Good piece, Alberto! Yes, social problems require social solutions.
It's not only that tech bros propose tech solutions, but also that they propose the worst possible solutions, as long as they can make a buck with them. In my view, AI companions are one of the worst possible AI applications you can imagine because, as we know, AI can simulate emotions but doesn't have them.
There could be tech "helpers" to curb isolation, and I'll give you one simple example: some people eat alone in a restaurant but would like to share lunchtime with others. It's often too awkward to invite strangers to join—especially if you are shy. But a "Sharelunch" app could notify you that another person, also a Sharelunch user, is eating nearby. Once they both get a notification, humans take over and connect in person, not virtually. They have lunch together and perhaps a new friendship starts.
This is not to give you a "pitch" for Sharelunch (which doesn't exist) but to make the point that AI is being used in the worst possible way. Other avenues are possible, though I'm not sure they are profitable enough.
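For what it's worth, the matching core of such a hypothetical app would be tiny. Here is a minimal sketch in Python—names, coordinates, and the 200 m radius are all invented for illustration, and a real app would also need consent, privacy safeguards, and a backend:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Diner:
    name: str
    lat: float
    lon: float

def distance_m(a: Diner, b: Diner) -> float:
    """Haversine distance between two diners, in meters."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))  # Earth radius ~6371 km

def nearby_matches(me: Diner, others: list[Diner], radius_m: float = 200) -> list[Diner]:
    """Return other users currently eating within radius_m of me."""
    return [d for d in others if distance_m(me, d) <= radius_m]

me = Diner("Ana", 41.3851, 2.1734)
others = [Diner("Ben", 41.3853, 2.1736), Diner("Cleo", 41.40, 2.19)]
print([d.name for d in nearby_matches(me, others)])  # Ben is ~30 m away; Cleo is ~2 km
```

The hard part, as the comment says, isn't the code—it's getting two shy humans to actually sit down together.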
With Character AI, I've seen a lot of discourse that basically says, “ban children from using it, but adults are mentally healthy and can use the platform.”
I’m sorry — there’s nothing healthy about forming relationships with any form of AI