AI can't solve an inherently social problem—but it can contribute positively
The population is profoundly, profoundly diverse. I think it goes almost without saying that some fraction of it is going to have any reaction you can think of, very much including having relationships with generative AI programs that deal positively (by their own testimony) with loneliness and mental illness. (Do not forget there were people who thought they were getting therapeutic results from Joseph Weizenbaum's ELIZA.) We will soon start getting testimonies to this effect, and by "soon" I mean this year. Then the question will be: what do we think of these reports? Do we categorize the people making them in some way that allows us to say "only people like that" feel that way and what they say has no relevance for the rest of us? Or will we be more open-minded? Beats me, but I think I can guess what my position will be.
Interesting post. I wrote a bit about this here: https://davefriedman.substack.com/p/intimations-of-empathy-in-chatgpt . I found that ChatGPT seems to provide some explanation of how to empathize with others, given certain prompts. Intimations of empathy, if you will.
Besides the commercial applications you mention in your very good article, there are other good use cases for AI companions in mental health that apply to specific domains as well. Loneliness doesn't only mean you don't have someone to chat with. It also means you don't have someone who can help you when you most need it. So there are other commercial AI companions (not using GPT-related technologies) that help and support elderly people in case of an emergency (e.g., Alexa Together), medical AI companions that help answer your health-related questions (Ada), AI robots to support children with autism (LuxAI's QTRobot, Moxie), and so on. (There are many more use cases, and the number is growing.)
Meanwhile, the field of cyberpsychology is doing a lot of research on how AI technologies can be successfully used to assist people's mental health, while the FDA is ensuring the safety, efficacy, and security of certified AI medical apps. I recommend checking the FDA's list of Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
How Loneliness Reshapes the Brain
attunemedialabs.com: Generative Emotional Intelligence & Empathic Virtual Companions that mimic human consciousness.
Fascinated by this topic...
Alberto writes, "I confess I'd have a hard time coming up with a scenario where a person would choose an AI system over another person to have a fulfilling conversation."
I've spent the last 30 years searching for intelligent human conversations on the net. I'm almost always unsatisfied at some level because my level of need is completely unreasonable.
Give me a bot that knows way more than I do on the topics that interest me, a bot that has infinite patience for ongoing in-depth conversations, a bot that has a sense of humor, a bot tailored to my taste in every way possible (must look like Diane Lane), and I'd be afraid of what would become of me. I'd probably type myself to death.
We've given so little thought to what it would be like to actually get exactly whatever it is we want, because we have so little experience with that situation. For a great film on this topic, see Ruby Sparks, a most brilliant work of art written by Zoe Kazan. It's fun too, with a sweet ending, not a downer flick.
Alberto, if you haven't seen that film yet, be sure to check it out. I'm sure it would inspire a new article for you. It's not specifically about AI or virtual reality, or anything involving computers, but it tells a story that is completely relevant to these technologies.
The theme of the film Ruby Sparks might be summarized as:
Be careful what you ask for.