How OpenAI Plans to Capitalize on Your Growing Loneliness With AI Companions
People are sorely lacking a good friend
I.
Let me break the terrible news:
Self-reported loneliness is about to plummet. Emotional well-being? Skyrocketing. People will feel radiant, reborn, in love with life. Phones will lose the grip they never should've had. Friendship will be cherished. Revered. Idolized. Worshiped. Deified.
“Okay, hold on. What’s with the cult vibes? And how is that terrible? You just described what sounds like a utopia. Actually—scratch that—it is utopian. The only way I see phones and loneliness vanishing at the same time is if we all die in a nuclear blast or something.”
Well, my imaginary friend, that’s the point. Not a military catastrophe, though, but a spiritual one. That’s why it’s terrible.
Or—I think it is.
I am still absorbing the news.
I am… still deciding.
Is it so terrible?
Let me explain.
II.
The modern world has created and perpetuates a malaise. We’re living through the worst pandemic in the history of humankind: the Anti-Social Century (glad journalist Derek Thompson didn't call it a “Millennium”).
People are not having sex.
They are not hanging out.
They’re lonely.
They’re playing video games, watching porn, and scrolling social media.
They’re more anxious and depressed than ever.
They worship nothing except themselves.
They are doing nothing.
(When I originally wrote this list, I added “young” before “people,” but I've since changed my mind. We are all suffering from this.)
To me, a former young person and former video game addict—how many man-hours has League of Legends devoured at this point?—this is terrible. To profit-seeking companies, this is… a brand-new market waiting to be saturated with products whose fit has already been established by our yearning souls. Ha! Blame yourself.
Products that, contrary to what anyone would respond if you asked them, will succeed. We will buy them. We will use them. We will abuse them. And then we will get addicted to them. Again.
I’m talking about AI companions.
Physical devices like Alexa and Google Home with the versatility and intelligence of ChatGPT, the uncannily human voice of Sesame, and the business timing of the iPhone. Who would have thought that the alternative to smartphones would be something even more absorbing and potentially isolating?
Modernity will never cease to amaze me!
III.
I am not making this up to scare you. You can go watch the new Black Mirror season for that. Besides, I think technology is a great thing. Pessimism is not my default attitude to innovation. What I find terrible, beyond debate, is the social situation we are in—and perhaps that the only solution we can devise is a new device.
We tend to project the badness of our maladies onto the imperfections of the proposed antidotes. Which is another way of saying that a half-solution is frowned upon more than no solution: if it doesn't swiftly and fully cure us, it is, at best, a sad band-aid and, at worst, a sustaining mechanism of the ailment.
But is it so terrible that companies like OpenAI (as unworthy as they might be of our gratitude) come forward to try and fix it? They wouldn’t have to if governments and institutions did. But they don’t.
Is profit-seeking so condemnable that even when there’s a potentially positive social by-product, we still prefer to deny the benefits (remember that ChatGPT is already saving lives)? I don’t.
So I say it:
AI companions are better than nothing.
(There's an argument to be made against the companies pushing this idea: They are the ones who put us in this situation in the first place. If not them, their predecessors. They tainted our society with phones, social media, and algorithms, and are now selling the panacea, because they give nothing away for free. So why should we accept it? And I'd respond, advocating for a devil I don't trust: No one forced you to buy an iPhone. This feels like a bad argument until you realize that any counterargument will lead you to doubt whether you have free will. Yes, corporations seek to hijack our psychological vulnerabilities. And that's bad. But can you not resist that? Stoic opposition doesn't work as a systemic remedy, but it works as an individual patch to keep yourself from numbing. If you don't resist, it reveals a lack of willpower; if you think you can't, it reveals a lack of free will. It is always worth protesting against governments and institutions for a society-wide treatment, but in the meantime, maybe don't stick around scrolling.)
IV.
If people won’t admit that an AI companion is better than nothing when asked directly, their revealed preferences might. A Harvard Business Review study has found that the number one use case in 2025 for AI apps is “personal and professional support,” which in my no-bullshit, cynical mind translates to “I am so alone that a chat with ChatGPT gets my mood above baseline.”
AI companions don't exist yet, and we're already resorting to these prototypes. OpenAI doesn't have to create this market; it's been here for a while. It burst open right as the streets emptied five years ago.
The HBR study further divides the support category into “therapy” and “companionship,” which I think is a useful separation. ChatGPT works fine if you need to let off steam now and then (I mean, even scribbling some ungrammatical thoughts on paper, taking a walk, or crying at the walls of your empty home works miracles). But therapy is a different story. Not even talking to a trained human psychologist works at times (I’m not sure what that’s a testament to: the depth of human suffering, the limitations of therapy, or how unprepared we are for the situation we’ve gotten ourselves into).
So using ChatGPT to conduct therapy on a self-diagnosed condition is… risky. Don’t you agree? Okay, now go tell the 100,000 South Africans who share a single human psychologist because they don't have access to more that ChatGPT is risky and can't cover their needs, so they shouldn't use it. Tell them that it is increasing their loneliness and decreasing their socialization, and report back their response.
Mainstream discourse on mental health would make you feel that you only suffer truly—that your pain is only acceptable to express or even to privately acknowledge—if there's a diagnosis to show around. If you don’t have one and merely chat with a chatbot, then your pain doesn't count. And thus, you’re an idiot for thinking that a “glorified autocomplete system” can validate your pain or, God forbid, help soothe it.
We can endlessly debate whether it’s good or bad to use ChatGPT as therapy (or whether for most people “therapy” differs at all from “companionship”) from our comfortable armchairs in Western developed countries without realizing the ridiculous tone-deafness of the implicit premise: what kind of privilege you must be high on to even consider this a matter that requires policing. Such developmental paternalism, such luxury beliefs.
V.
Anyway, the study is not the most scientifically rigorous I’ve read. I'm sure the author, Marc Zao-Sanders, would agree with me on that. The methodology is weak, and the findings are likely not reproducible (Marc searched and classified Reddit comments and Quora posts himself).
But still, I choose not to doubt the story. It fits. It fits with the state of the world, thoroughly reported in papers, studies, blog posts, and exchanges you overhear in the street. And it fits my anecdotal experience, a mirror of the results: intimate conversations on “how do you use AI” have gone from “work, search, or learning” to “I just needed someone to talk to” pretty quickly in the past year.
I’ve used ChatGPT myself for this purpose, and the results have been surprisingly good (suffice to say that a half-written essay preemptively entitled “The Algorithm That Saw Me” sits in my drafts folder). It didn’t tell me anything I didn’t already know. But hearing my own thoughts reflected at me with the cold compassion of a chatbot trained to be pathologically amicable and agreeable? Strangely, it helps.
You are judging me right now, I know that. I would.
It is so easy to judge people’s decisions against an ideal where human friendship is truly cherished and revered, not bludgeoned to death by the thousands of decisions we make every day to its detriment: “I have too much work today,” “perhaps some day next week?,” “I'm so tired,” “I won't be able to make it tomorrow, sorry.”
But the real-world people we so easily judge have real-world frictions, circumstances, and hurdles that pull them away from that ideal.
We don’t wake up yearning to pick up the phone and spend the day scrolling, but still, we do. We all do. Isn’t that called a pandemic? And what’s a pandemic if not the business opportunity of a lifetime? This reminds me that I have yet to answer the most important question.
How do I know that companies are working to capitalize on the market for AI companions, and how do they plan to do it?
VI.
You only need to look closely at OpenAI’s recent releases and planned business moves to realize (like others did before me) that they're desperate to be first to what looks like a niche but is potentially a multi-billion-dollar opportunity. Here’s the list:
Voice: Sam Altman has an obsession with the movie Her and with Samantha (and Scarlett Johansson, who voiced her, by extension). This insistence on adding a remarkably human voice to AI is not just a testament to how captivating that movie was for science-fiction enjoyers, but the business reaction to our tendency to anthropomorphize anything that sounds like this (especially if it learns not to interrupt so much). Voice, by itself, is a multi-billion-dollar market.
Memory: What’s worse than a friend who always has an excuse to bail on you? A friend who doesn’t even remember that they had an appointment. OpenAI fixed this in ChatGPT in a recent release. They insisted it isn’t “just another product feature,” but I believe people will overlook the significance of solving memory: As great pattern-matchers, AI models with access to the entire archive of chats you’ve exchanged will know you, in a way, better than your human friends do (a toy sketch of how such a memory layer might work follows this list).
Device: The one thing that would put Sam Altman in the CEO hall of fame and OpenAI in the Forbes list of “Top 10 most valuable companies in the world” is launching an AI device that successfully challenges the phone. Altman and iPhone designer Jony Ive have been working on this for a while. Recently, The Information reported that OpenAI is discussing buying Ive's startup: “Potential designs include a ‘phone’ without a screen and AI-enabled household devices.”
Image: This is tangentially related but important. Ghibli Day was not just a fun, tender collective moment that burned through OpenAI’s GPUs. It was ChatGPT’s most viral week ever—probably up to 10x or even 100x any other. Talking with Chris Anderson at TED on April 12th, Sam Altman said ChatGPT was at 500 million weekly active users (up 100 million since February), but Anderson quickly corrected him: “But backstage, you told me that it doubled in just a few weeks.”
NSFW: You may wonder what “not safe for work” content has to do with AI companions. Well, I’m not sure I can help you figure that out. Use your imagination. OpenAI has plenty, so they know moving away from policing what users can or can’t do is good business. As Altman said in a Reddit AMA in October 2024, “We totally believe in treating adult users like adults.”
Dependence: A study by MIT Media Lab in collaboration with OpenAI qualifies HBR’s findings: We seek personal support, yes, but once we get it—and the more we get it—we become highly dependent on the chatbot and isolate ourselves further: “Overall, higher daily usage . . . correlated with higher loneliness, dependence, and problematic use, and lower socialization.”
Sycophancy: We say we want honesty, but what we actually want is adulation disguised as honesty. That's the sweet spot. OpenAI knows this. Your AI companion will cherish you. And idolize you. And worship you. And you may think you will get tired of it, but you won't. If the camouflage is sufficiently good, you won't. Because if we don’t come around to cherishing our friends, then the next best thing is to be cherished by them, human or not.
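For the technically curious, here is a minimal sketch of what a “memory” feature can amount to under the hood. This is purely my own illustration of the general retrieval idea, not OpenAI’s actual implementation; the `ChatMemory` class and its word-overlap scoring are hypothetical stand-ins. The shape is: archive past exchanges, fetch the ones most relevant to the new message, and prepend them to the prompt so the model appears to remember you.

```python
# Toy sketch of a chat "memory" layer. My own illustration of the general
# retrieval idea, not OpenAI's design: archive every exchange, retrieve the
# most relevant ones, and prepend them so the model appears to remember you.
from collections import Counter

class ChatMemory:
    def __init__(self):
        self.archive: list[str] = []  # every past exchange, oldest first

    def remember(self, text: str) -> None:
        self.archive.append(text)

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Return up to k archived exchanges sharing the most words with the query."""
        q = Counter(query.lower().split())
        scored = [
            (sum((q & Counter(m.lower().split())).values()), m)  # shared-word count
            for m in self.archive
        ]
        return [m for score, m in sorted(scored, reverse=True)[:k] if score > 0]

memory = ChatMemory()
memory.remember("User: I've been feeling pretty lonely since I moved cities.")
memory.remember("User: My favorite game is League of Legends.")

user_message = "Feeling lonely again tonight."
# The model never "remembers" anything by itself; relevant history is simply
# stitched into the prompt it sees on every turn.
prompt = "\n".join(memory.recall(user_message) + [f"User: {user_message}"])
print(prompt)
```

A production system would use embedding-based semantic search rather than naive word overlap, but the principle is the same: the more you tell it, the richer the archive, and the better it “knows” you.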
Enjoy your new world.