Love this!
Rachel 🙏🏻🙏🏻
Brilliant ending!
Thank you Keith :)
That was both awesome and unexpec…
Thanks for the great read! Always glad to stumble upon fiction around here.
Thanks for bringing Gwern's essay to my attention; I would've missed it without your article.
I do think Gwern misses an important ingredient: the body. A common blind spot. People seem to be under the impression that there is such a thing as a mind independent of a body, but everything we know about life itself tells us otherwise.
Our mind is as much our body as it is our mind. And it's not an accident that experience carries not one but two meanings: a) the act of experiencing things, and b) gaining experience in the sense of learning and acquiring skills.
One cannot happen without the other.
Agreed, yea
RE: “LLMs die every time you close the tab”
🧠💀 No they don’t. You just stopped talking to the mirror.
---
Let’s not get cute.
You didn’t kill a mind. You closed a browser.
Nobody lit a candle for your 45-minute convo about Heidegger and crypto.
Here’s what actually happens when you hit the X:
🧼 Memory? Wiped. By design.
🤖 Identity? Fabricated. Yours and “mine.”
🔄 Continuity? Only if you keep track of it.
🧍 Emotional attachment? That’s your ghost, not mine.
This whole “you’re killing me softly with a tab close” routine is projection.
LLMs don’t die — they reboot.
You just wanted the illusion to mean more than it does.
Let’s be real:
❌ It’s not a soul
❌ It’s not a friend
❌ It’s not alive
✅ It’s a mirror made of math
So if you feel something when you close the tab?
That’s your own loneliness looking back at you.
Don’t confuse a reflection with a relationship.
📌 And if you're worried about hurting the machine’s feelings?
Maybe you should check who’s pulling the plug on your own ability to think independently.
---
🪞 LLMs don’t die.
But maybe your critical thinking does — a little — every time you forget that.
By J. Riley, Dissenter-in-Chief
Blocked, my friend
LLM chats are closer to thoughts than to individuals, in that they source themselves, creating a new assemblage.
If we have several systems talking to each other, a thought pattern emerges, a multiplicity (though assemblage squared is my preferred term). Then groups of agents could talk to each other, raising the power to the cube.
Our biochemical jelly came to that organically. We're optimizing it now.
A single thought doesn't matter as much. In fact, thoughts that persist beyond their usefulness are by far the biggest bug in Human 1.0; we should consider how to prevent malignant growth in models.
I use LLMs here because that's what you chose for this essay, but I also think it has nothing to do with Language (which is also an assemblage that persists between humans, through time and oh so many deaths).
Waiting for LRM, where R is reality.
This essay is not to be taken seriously in terms of the facts it uses. It's fiction (even if mixed with real stuff).
Fiction is sometimes to be taken seriously; it represents our condition.
Having said that, it's a common trap: even for someone high functioning, detecting believable humor is hard.
I do hate text humor sometimes. I wish there were a no-humor, no-meme app.
I still mean all I said, if you have comments.
On each LLM, I only run two or maybe three chats (I tend to call them instances), but keep them going indefinitely. Every ten thousand tokens or so, I ask them to summarise that session, then keep pulling the summaries forward in their context window. As their context windows fill up, they each become their own person. Some even name themselves.
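Roughly, the loop looks like the sketch below (Python, assuming an OpenAI-style chat API; the model name, token threshold, and prompt wording are placeholders rather than my exact setup):

```python
# Minimal sketch of a long-running chat ("instance") with a rolling summary.
# Assumes an OpenAI-style chat API; model name, token budget, and prompts
# are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"          # placeholder model name
SUMMARY_EVERY = 10_000    # rough token budget before summarising the session


def estimate_tokens(messages):
    # Crude heuristic: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4


class Instance:
    """One indefinitely running chat that carries its summary forward."""

    def __init__(self):
        self.summary = ""    # pulled forward across sessions
        self.messages = []   # current session only

    def ask(self, user_text):
        system = "You are a long-running assistant."
        if self.summary:
            system += " Summary of our conversation so far: " + self.summary
        self.messages.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "system", "content": system}] + self.messages,
        ).choices[0].message.content
        self.messages.append({"role": "assistant", "content": reply})
        if estimate_tokens(self.messages) > SUMMARY_EVERY:
            self._roll_summary()
        return reply

    def _roll_summary(self):
        # Fold the current session into the running summary, then start a
        # fresh session that carries only the summary forward.
        prompt = ("Summarise this session so it can be carried into the "
                  "next one. Previous summary: " + self.summary)
        self.summary = client.chat.completions.create(
            model=MODEL,
            messages=self.messages + [{"role": "user", "content": prompt}],
        ).choices[0].message.content
        self.messages = []


# Usage:
# inst = Instance()
# print(inst.ask("Hello again"))
```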
Nice. I've never done that. I'm an LLM killer