I.
A guy who goes by Henry tweeted this:
there should be a name for the phenomenon where naming something kills it
Someone answered: Vergegenständlichung.
It's a German word that means turning a vague feeling into a clear object. Sadly, I don't understand German, so it didn’t work on me. In English, it loosely translates to objectification (without the political connotation) or, to be unambiguous and academic, thingification.
Perhaps the most interesting question about our tendency to turn abstract stuff into concrete, tangible, perceivable stuff by naming it is why. Why do we need to anchor the world in language instead of letting it blur, morph, and merge back into patterns of sensation? Because our brains can't stand an illegible world: We draw maps. We build measurement tools. We formalize habits into rules and standard practices. We confabulate stories to explain phenomena we can't grasp. We believe in protective gods. And we name things.
If reality isn't legible, can it be said to exist at all? If a tree falls in a forest and no one is around to hear it, does it make a sound? Perception precedes reality. Intelligibility does as well. And it makes a stronger case: you can perceive the territory a map describes, but it's not until you draw the map that the territory takes on its full meaning. And then we kill it.
So, in a way, things only exist in that ephemeral lapse of time between when they become readable and when we read them.
We'd better keep going. Henry used “vibe coding” as an example. Andrej Karpathy was practicing it long before he coined the term. Now, as Simon Willison recently complained, any kind of AI-assisted coding is designated as vibe coding. The idea is dead. “Slop” is another example. Haven't you realized that everything is slop now? Anything you don't like is slop. It's not an ontological category but an insult. Dislike this post? It's slop.
Also enshittification, the inevitable degradation of the online experience. Now, however, as soon as you use AI to generate whatever, you're enshittifying, regardless of any quality assessment you may conduct. Enshittification can also be defined, to come full circle, as the process by which developers vibe-code slop into production.
In case this post wasn't already meta enough, I'll say that the moment I read Henry's tweet, I thought about “artificial intelligence” itself. AI was born on a fateful day in the summer of 1956, conceived by four ambitious researchers.
It died the same day.
II.
Have you ever wondered why the AI effect happens at all?
We invent or discover something that behaves like a human in some respect—it creates images, or solves puzzles, or wins at chess, or drives a car, or distinguishes between dog breeds—and call it AI. But as soon as we grasp how it works, it doesn't deserve the label anymore. We downgrade it to machine learning or an algorithm or, if we're feeling snarky, “statistics.” Then a new idea comes along and blithely takes the “AI” placeholder, restarting the cycle.
The very moment AI clicks, you unmistakably realize that it isn't magic. It was, but isn't. I used to think the AI effect was a consequence of having named this thing “artificial intelligence” specifically, but I now understand it happens because we named it at all. Period.
Really: what name could we possibly have given, with any hope of it staying appropriate, to something that changes so much, so quickly?
AI—let's keep this simple—turns 70 next year. It has gone through multiple paradigms—symbolic logic, expert systems, neural nets, machine learning, deep learning, and now generative models—and through cycles of hype (summers) and bust (winters). Had humans evolved from pre-lingual apes to Homo sapiens in the time it took ELIZA to become ChatGPT, we'd be skeptical of our name too.
“Human” would refer to too many different things. Would the word feel honest to you if it were used to designate your kid and also the chimpanzees in the zoo? Would you feel sympathy for it? Would you identify with it? If you answered with an instinctive “yes,” then you haven't thought hard enough about what it entails to evolve so much in so little time.
We don't rebel against “human” because the object—us—is stable and well-defined. You recognize as your equals your fellow Romans and Athenians of antiquity when you read their words or see their faces in colorless marble statues. Thankfully, we evolve much more slowly than our culture develops. AI doesn't. So it resists being pinned down.
(I wonder what kind of naming crisis will emerge once cyborgism and transhumanism become stable fashions. We're already going through one such crisis with transgenderism.)
III.
It's not just change that resists naming but novelty itself.
In the comments on Henry's tweet, I saw someone quoting a passage by the late ethnobotanist Terence McKenna that generalizes this phenomenon:
. . . imagine an infant lying in its cradle and the window is open, and into the room comes something, marvelous, mysterious, glittering, shedding light of many colors, movement, sound, a transformative hierophany of integrated perception. The child is enthralled, and then the mother comes into the room and says to the child, "That's a bird, baby, that's a bird." Instantly the complex wave of the angel, peacock, iridescent, transformative mystery is collapsed into the word. All mystery is gone, the child learns this is a bird, this is a bird, and by the time we're five or six years old all the mystery of reality has been carefully tiled over with words. "This is a bird, this is a house, this is the sky," and we seal ourselves in within a linguistic shell of disempowered perception. . .
Language is a powerful tool. As we agreed, you can perceive the things you can name. Only those. But it's also a sealing spell, a “linguistic shell”. What you name you contain, you label, you encapsulate. And, eventually, you shrink. A name allows you to attack the thing and also to use it as a weapon. And, in the worst cases, to demystify it to the point of indifference.
What's the sky but that blue thing that looms over us on sunny days? It's always there, no matter what. Yet it's such a wonder if only you let semantic satiation divest the word of meaning for an instant. Repeat aloud, slowly: sky, sky, sky, sky, sky, sky, sky, sky, sky, sky, sky, sky. Now it's gone. You have permission to see the white, spongy, shapeless shapes; the brave defiers of gravity, gliding against an unyielding background; the standing giants, swaying and singing to the rhythm of their hypnotic movement; and, if you're lucky, the shiny, shy ladies of the night, waiting for you to turn off the lamppost that dulls your otherwise delightful perception.
Or, if you’d rather kill the scene and retreat into your flavorless semantics—then yes, it's just clouds, birds, trees, stars.
The most mundane thing whose name you don't know is a spectacle for your curiosity—like the sparrow that enters a baby's room through the window, happily chirping, right before an unwitting mom kills it—whereas the most incredible things—quasars, rainbows, fractals, shadows, mirages, and dreams—stop deserving your undivided attention as soon as you learn their names.
Artificial intelligence belongs to this latter category: a fundamentally mysterious wonder birthed from nature and engineering ingenuity in equal parts and killed that faraway summer of 1956. We didn't invent AI. We didn't create it. We merely helped nurture it. AI is, like many other things we so undeservedly appropriate, a feature of our universe. We did kill it, though. That we can call our deed.
I know that comparing AI to rainbows and dreams will incite a negative reaction in some readers. That's due to AI’s place in the current discourse. But, if you manage to stay away from that, you will see how amazing it is that a software program can have a chat about any topic you want, in any style you want, for as long as you want, going as deep as you want. Or make images. Or solve puzzles. Or drive cars. Or win at chess. Or distinguish dog breeds.
Allow yourself to be bewildered by it, for it is bewildering, but only if you let it—if, just for a fleeting instant, you forget its name.
If you fail to do it, you're no better than a well-intentioned mom killing the marvelousness of a bird, her child forever the victim of such terrible disempowered perception. Except now the casualty is yourself and your fading ability to perceive wonder in both the mundane and the mysterious.
IV.
So, AI died twice that fateful summer day in 1956. The first time because it was a novel thing we could have allowed to fly free of ties, labels, and constraints, but didn't. The second because it started to change so fast that people would inevitably writhe under its elusiveness. The current state of AI and people's attitude toward it—half bafflement, half contempt—is a consequence of this.
AI—as object, not as word—is the least guilty of everything happening around it. The real culprits are the usual ones. You know who they are. But drawing that line takes real effort and often more information than we’re allowed to have. In the eyes of the world, anyone who merely likes AI is as complicit as those who actively enable it.
I've argued elsewhere that the present would be shockingly different—and arguably better—had the alternatives to “AI” won the naming contest back in the 1950s. I now wonder if the only name that would have actually made a positive difference was no name at all. “Just keep building,” I would’ve advised them, “and when they ask you what you're doing, just resort to ‘cybernetics’ or some other joke.”
It was Nietzsche who said, “That for which we find words is something that is already dead in our hearts.” I find it interesting that, in a sense, those four ambitious researchers killed AI before it was even born. They inadvertently named the field after its ultimate purpose—like naming physics “the theory of everything” or medicine “studies on immortality.” Seventy years later, that purpose has yet to be fulfilled, so the field is still our child-to-be.
In a darkly ironic way, AI will be the first thing ever to be born after having died.