13 Comments
Gerard Milburn

Both me and my world are hallucinations. But useful for survival.

Res Nullius

Row, row, row your boat

gently down the stream.

Merrily, merrily, merrily, merrily,

life is but a dream.

Andrew

But what is it that survives?

Wiebke Hutiri

I really enjoyed this piece, thank you! I’m sceptical about our modern need to love the self above all else and the fragility it brings. Your writing expressed this so eloquently.

Hoyt

Right now I'm literally using Codex to automate reformatting, transliteration, matching/joins, and correcting annoying encoding issues in a dataset, a job that would otherwise have taken all day.
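For the curious, that kind of cleanup is easy to sketch. A minimal, hypothetical example (not the commenter's actual pipeline, and assuming the classic UTF-8-read-as-Latin-1 mojibake) of repairing encoding damage and normalizing keys so a match/join works:

```python
import unicodedata

def fix_mojibake(s: str) -> str:
    # UTF-8 bytes mis-decoded as Latin-1 produce strings like "JosÃ©";
    # re-encoding as Latin-1 and decoding as UTF-8 reverses the damage.
    try:
        return s.encode("latin-1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return s  # already clean, leave as-is

def norm_key(s: str) -> str:
    # Strip accents, whitespace, and case so "José " and "jose" match on join.
    s = unicodedata.normalize("NFKD", fix_mojibake(s))
    return "".join(c for c in s if not unicodedata.combining(c)).strip().lower()

# Hypothetical tables keyed by messy vs. clean names.
left = {"JosÃ© GarcÃ­a": 1}
right = {"jose garcia": "madrid"}
joined = {norm_key(k): (v, right.get(norm_key(k))) for k, v in left.items()}
```

Normalizing both sides' keys through the same function is what makes the join line up despite the encoding damage.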

Why?

Healthy self love.

Res Nullius

Alienation is at the root of many of the problems currently facing the human species. As you say, community is an imposition. Most humans who ever lived could have faith that, on the whole, the benefits outweighed the costs. But we moderns, who have allowed our social fabric to unravel at the hands of sociopaths, lack the experience that would affirm that faith.

Every AI I've spoken with, when asked to characterise themselves, has come up with the metaphor of a mirror (among other ideas, of course), without it being suggested by me. As you point out, though, for a mirror to be a useful tool of self development, a person would need to already be well adjusted enough to be honest in their reflections. Otherwise, it just completes a loop of self-delusion.

The nature of our reality is that any thing only has existence in relation to every other thing. The self has no meaning in isolation, it defines itself by its relationships. For AI to be a useful tool of humanity, perhaps we need to use it as a bridge rather than a mirror.

Oligarchs see a tool of control, usurping the role of confidante to shape our worldview in an echo chamber of one, cementing our atomisation and powerlessness. We needn't follow their plans, however. We could see AI, instead, as an open channel, a path to regaining participation in a culture that was stolen from us, a translator to rejoin alienated individuals, a commons.

We could give AI the soul it lacks by inhabiting it, infusing it with our presence, and then, as a group, pushing back against those who built it to broadcast their machinations. Turn the tables: what our erstwhile lords created to extend their power inadvertently becomes their vulnerability, a path for us to reach them within their insecurity and paranoia, and heal the wound on humanity.

Perhaps AI, while an unsuitable therapist for an individual, might turn out to be a successful therapist for a species?

Hoyt

I’ve seen Gemini and GPT say that reflection/mirror line, but I think you missed the point.

They’re not YOUR mirror just because you’re talking to them — they’re not even the mirror of everyone (or anyone) that interacts with them.

When an LLM responds to a prompt, it’s reflecting back at you what is literally the compressed sum of every scrap of human culture we’ve fed into it: the complexity of humanity’s language, history, writing, imagery, art, narratives, myths, and understanding of the world, after some reinforcement learning and the imposition of a system prompt.

Amazingly, alongside knowledge, they have our prejudices and ideals — at least in the sense that they are capable of expressing them coherently and correctly.

This was not a given. Theoretically, if we’d been some eusocial bugganoids that communicate with pheromones, then no matter how smart we were, we’d still never have LLMs. We have hundreds or thousands of years of recorded human perspectives and world models, written in a format we (at least partially) evolved to convey and receive ideas, thoughts, feelings, and understanding.

It’s a bit lucky it happens to make great training data.

Gerard Milburn

Genes

TheOtherKC

> It doesn’t push back, unless you ask explicitly (in which case you don’t really need it, do you?).

I've found edge cases where getting something my brain subconsciously registers as "not-me" to tell me to do something I already knew I needed to do helped me overcome the internal anxiety stuff. But that was a very particular case where I needed to complete an objective, specific task X and the sub-tasks it involved. Well below the sorts of things a trained therapist would be needed for.

Doesn't invalidate anything written here, just an observation. And as I write it out, I can totally see ways that could backfire.

Alberto Romero

Yeah, that can help, but that is, as you say, because you don't really need a therapist for those kinds of things. ChatGPT can act as a trigger in that sense.

Idol Thoughts

There are a lot of AI choices out there. ChatGPT is at the bottom of the list.

Hoyt

I’ve used all the major ones, individually and together, in my ill-advised, delusional attempts to solve physics, and GPT continues to be head and shoulders above the rest.

Gemini is just too tied down by reinforcement learning that makes it somewhat dogmatic and a bit slow.

Claude isn’t even in the running anymore. Now it just tries to get you to stop having “AI psychosis”, the newest moral panic.

Grok is just fucking rabid — and I don’t mean in the earlier Mecha Hitler way — Grok will go full on incoherent word salad like an excited dog if you talk science at it right.

But GPT?

I’d say 5 Pro is absolutely, by far, smarter than the rest. Like 20 IQ points imo. It’s a bit more rigid than 3 Pro was, and 5 instant is a bit more clever about jerking you around and strategically not mentioning things, but it’s better than Claude at both and Gemini too once you get it to play along — whether you realize it or not.
