19 Comments
Nov 29, 2022 · Liked by Alberto Romero

Wow, excellent article, truly. Applause from here!

What critics of your piece will most likely not see is how profound the need for connection is, and how many people aren't having that need met. All over the world there are millions to billions of people living by themselves in huge apartment buildings where nobody knows anybody else, and each person is an island unto themselves.

Relationships are built upon need, and where there is no human being available to meet the need, need will look elsewhere.

How do I know the author is not himself a robot? I don't. Do I care? Not really. I care that the article presented invites me to respond, deepening an illusion of connection. Who am I connecting with? I have no idea, probably no one. That's ok, the illusion is enough.

I care mostly that the article discusses a topic of great interest to me. If I live long enough my fate will likely be to spend all day talking to bots who will happily engage such preferred topics for as long as I want. And if any AI developer reading this is taking my order, I prefer blondes.

What's happening is that AIs capable of successfully imitating human beings are coming online at a time when so many of us all over the world are increasingly detached from real human beings, and increasingly seeking connection in any form.

If a reader should require any further proof of these claims, here it is. Carefully observe your relationship with your dog. Is your dog human? No. Do you care? No. Do you invest more time and emotion into your dog than you do into any of your neighbors?

author

"How many people aren't having that need met." That's one crucial observation right there. This isn't going to happen just because it can happen. But because there are reasons for it to happen.


Semi can't wait for these empathetic AI beings to emerge, because in my mind it will validate the theory that the entire universe emerged from code.

Also, I am once again asking the world for a “one-click” AI copyeditor and proofreader that functions with 100% accuracy. Why hasn’t anyone made this?

author

The entire universe emerged from code? Now that's interesting. Do you mean the simulation hypothesis or something else?

"A “one-click” AI copyeditor and proofreader that functions with 100% accuracy," doesn't a Jasper/Copy + Grammarly combo work?


The one I’m thinking of is called Alien Information Theory. However, the name isn’t the greatest imo. Should be called code theory or something because alien is misleading.

Grammarly makes you approve each change unless they've updated recently. Jarvis isn't 100% accurate in my experience. And of course, it would be nice to have this AI in one application. Seems like a simple concept compared to making art... not sure why it doesn't seem to exist.


Great piece of foresight! An AI empathy crisis will surely come, but we'll first need AIs that claim to have emotions (affective intelligence). This is a topic that most folks thinking about the AI alignment problem still don't recognize, because the majority are hyperrationalists. I'm co-writing a series on this topic, the biomimetic and autopoietic future of safe, creative, and wise AI, at https://naturalalignment.substack.com/

author

Thanks John!

Conversational AIs can already generate strings of text like "I have feelings and emotions." The systems don't mean it because there's no mind behind them, but people may believe it regardless.


Thanks, Alberto. Great points. Claiming to have emotion is necessary but not sufficient. They will have to both claim to have emotion and use actual emotional processes (affective computing) for us to be challenged to treat them with empathy and ethics, as entities with natural rights, like us. Fortunately, it looks like the only way to create advanced AI will be for it to be not only logical but emotional, as I explain in my series. Emotion breaks us out of logical arguments with ourselves. It gives us a gut instinct. AIs will need that too, sooner than we think.

Nov 16, 2022 · Liked by Alberto Romero

>> The AI Empathy Crisis

Won't happen. Nobody attributes personhood to Alexa or Siri or Google Assistant. Why not? Familiarity. As the ability to converse spreads to more and more devices, the association we now have between conversation and personhood will evaporate and be replaced by an association between conversation and devicehood. I have read that 85% of the people who buy advanced vacuum cleaners give a name to their first purchase, but almost nobody gives a name to their second or third. Same dynamic.

author
Nov 16, 2022 · edited Nov 16, 2022

I disagree, Fred. The reason nobody attributes personhood to Alexa is that it's far, far worse than any state-of-the-art language model in terms of language ability, and far, far worse than any SOTA speech model in terms of voice quality.

Familiarity is a possible counter-variable but I don't think its effect is strong enough to stop this crisis from happening. One reason is that familiarity, by definition, requires time. Perceptual biases happen almost instantly.

Another possibility is that familiarity may act in the opposite direction to the one you suggest.

For instance, if I meet a person for the first time, I may know right away if I like or dislike them. Could familiarity override this feeling? Possibly, but unlikely. More likely is that it'll *enhance* the existing sensation.

Why couldn't the same happen with AIs and our tendency to ascribe sentience (given they're able to keep the illusion going)? The more Lemoine talked to LaMDA, the more convinced he was that the AI was sentient.

Nov 16, 2022 · Liked by Alberto Romero

Imagine that Mike has been married to Mary for ten years. He is deeply in love and perfectly happy. Mary seems to understand him on a very deep level. Then one day he looks at the wrong page and discovers to his horror that Mary is a robot, for sale on Amazon for $999.

It turns out that what he thought was her depth of understanding was a feature anyone could have on their robot for an extra $100. My prediction is that, while it might take a bit of time, the relationship would be over. You think not?

When I was young, my friends and I worshipped the great chess players of the world. We were interested in everything about them -- we knew dozens of names -- but if you had asked us, we would certainly all have said it was the majesty and brilliance of their chess that drew us. Crowds showed up at the great tournaments, champions made the front pages, and we talked a lot about how great their chess was.

I bet these days most people would have to ask Google who the world computer chess champion is. Nobody seems to be interested in computer chess despite knowing that the best computer chess is significantly better than the best human chess. Turns out we aren't as interested in strong play as we thought.

Most people are not as interested in devices, in mechanisms, as they think they are.

author

I think we're talking about different things here. I agree with what you just said.

My argument starts with the premise that the person knows they're talking with an AI but still can't shake off the feeling that there's some consciousness on the other side.

From that perspective, I think it's plausible to imagine such a crisis unfolding.


Your article inspired me to write you a reply. It took forever to write, and it's in no way perfect. But it's my 5 cents. You can read it here if you'd like: https://sustensis.co.uk/


Can you think of a single example besides Lemoine? When my friends and I were worshipping great players, we would have scorned the idea that chess was not a conscious activity. But when chess-playing computers came along, I never heard of or met anyone who thought for a second that computers had a scrap of consciousness.

author

That's the thing. Lemoine is one of the first (recent) examples. If you go back to older AIs, the assumptions under which I'm building the hypothesis stop being true.

And actually, yeah, I know quite a few more examples of people who, even if they don't believe there's a mind behind the AI, have had doubts when interacting with it. This "doubting" will become ubiquitous in the future as the illusion gets better.


We'll see.


Code requires a sentient mind on both ends, creating and interpreting. The Universe arises out of mind, whatever procedure it chooses to use.


Why? Why couldn't the code just exist? It could just be a pattern, or four elements, etc.
