AI Will Be Met With Violence, and Nothing Good Will Come of It
It has started
Sorry to bother you on Saturday. Thought this was important to share.
I.
The first thing you learn about a loom is that it’s easy to break.
The shuttle runs along a track that warps with humidity. The heddles hang from cords that fray. The reed is a row of thin metal strips, bent by hand, that bend back just as easily. The warp beam cracks if you over-tighten it. The treadles loosen at the joints. The breast beam, the cloth roller, the ratchet and pawl, the lease sticks, the castle; the whole contraption is wood and string held together by tension. It’s a piece of ingenuity and craftsmanship, but one as delicate as the cloth it weaves from wild plant fibers. It is also the foundational tool of an entire industry, textiles, one that has kept its relevance into our days of heavy machinery, factories, energy facilities, and datacenters.
It is not nearly as easy to break a datacenter.
It is made of concrete and steel and copper and it’s on the bigger side. It has interchangeable servers, and biometric locks and tall electrified fences and heavily armed guards and redundancy upon redundancy: every component duplicated so that no single failure brings the whole thing down. There is no treadle to loosen or reed to bend back.
But say you managed to bypass the guards, jump the fences, open the locks, and locate all the servers. Then you’d face the algorithm. The datacenter was never your goal; the algorithm lurking inside is. It doesn’t run on that rack, or any rack for that matter. It is a digital pattern distributed across millions of chips, mirrored across continents; it could be reconstituted elsewhere, and it’s trained to addict you at a glance, like a modern Medusa.
But say you managed to elude the stare, stop the replication, and break the patterns. Then you’d face superintelligence. The algorithm was also not your goal; the vibrant, ethereal, latent superintelligence lurking inside is. Well, there’s nothing you can do here: It always “gets out of the box” and, suddenly, you are inside the box, like a chimp being played by a human with a banana. It’s just so tasty…
There’s another solution to break a datacenter: You can bomb it, like one hammers down the loom.
Some have argued that this is the way to ensure a rogue superintelligence doesn’t get out of the box. A different rogue creature took the proposal seriously: last month, Iran’s Revolutionary Guard released satellite footage of OpenAI’s Stargate campus in Abu Dhabi and promised its “complete and utter annihilation.”
But you probably don’t have a rogue nation handy to fulfill your wishes. You might end up bombed instead, and we don’t want that. That’s the thing about rogue intelligences: you can’t predict them.
And yet. Two hundred years of increasingly impenetrable technology—from looms to datacenters—have not changed the first thing about the people who live alongside it. The evolution of technology is a feature of the world just as much as the permanent fragility of the human body.
And so, more and more, it is people who are the weaker link in this chain of inevitable doom. And it is people who will be targeted.
II.
April of 1812. A mill owner named William Horsfall was riding home on his beautiful white stallion from the Cloth Hall market in Huddersfield, UK. He had spent weeks boasting that he would ride up to his saddle in Luddite blood (a precious substance that served as fuel for the mills).
A few yards on, at Crosland Moor, a man named George Mellor—twenty-two years old—shot him. The ball struck Horsfall in the groin and, nominative-deterministically, he fell from his horse. People gathered, reproaching him for having been the oppressor of the poor. Naturally, loyal to his principles in death as in life, he couldn’t hear them. He died a day later in an inn. Mellor was hanged.
History rhymes, they say.
April of 2026. A datacenter owner named Samuel Altman was driving home in his beautiful white Koenigsegg Regera from Market Street in San Francisco, US. He had spent weeks boasting that he would scrape and steal our blog posts (a precious substance that serves as fuel for the datacenters).
A few hours later, at Russian Hill, a man named Daniel Alejandro Moreno-Gama—twenty years old—allegedly threw a Molotov cocktail at his house. He hit an exterior gate. Altman and his family were asleep, but they’re fine. Moreno-Gama is in custody.
This kind of violence must be condemned. This is not the way. It’s horrible that it is happening at all. And yet, for some reason, it keeps happening.
Last week, the house of Ron Gibson, a councilman from Indianapolis, was shot at thirteen times. The bullet holes are still there. The shooter left a message on his doorstep: “NO DATA CENTERS.” Gibson supports a datacenter project in the Martindale-Brightwood neighborhood. He and his son were unharmed.
In November 2025, a 27-year-old anti-AI activist threatened to murder people at OpenAI’s SF offices, prompting a lockdown. He had expressed a desire to buy weapons.
Increasingly, as the objects of people’s anger and frustration and desperation become unreachable behind fences and guards, or abstracted away in ones and zeros, or elevated above the clouds, the mob will turn their unassailable emotions toward human targets.
I don’t want to trivialize the grievances of the people who fear for their futures. I don’t want to defend Altman’s decisions. But this is not the way. This is how things devolve into chaos.
And I wonder: how desperate can people be before these isolated events become a snowball of violence that will be resisted by neither datacenters nor rich people’s houses?
III.
Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me to pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.
There’s a lot of friction to escalating violence, but that friction dissolves the moment this sentiment becomes common. Normally, it just fades away anyway, but there’s one scenario where I see it inevitably escalating:
If people feel that they have no place in the future.
If they feel expelled from the system—they’re unable to buy stuff, their skills become obsolete, their chance at earning a living is replaced by a swarm of AI agents, they think we are truly going to die (so far, the violence has been tied mostly to safety AI movements)—then they will feel they have nothing to lose.
And then, and I’m sorry to be so blunt, then it’s die or kill.
Perhaps the most serious mistake the AI industry made, after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition, was making it explicit through constant speeches that amount to: “we are creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition.”
And, to top it off, they add “careful down there.”
The difference between AI and, say, looms, is that this has been broadcast to the entire globe, and in a strangely self-conscious way. The AI leaders know the problems that will emerge, cannot help but talk about them constantly, and so keep letting us know, which makes them look like psychopaths. How do you guys think people will react to this? You should be much less self-conscious and much more self-aware: realize what you sound like!
(No piece of journalism, much less one that leans forensic rather than sensationalist, could do a better job against them than their own words. These guys lack basic self-awareness. For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.)
People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
AI has become the perfect scapegoat. It doesn’t help that the entire AI industry has decided that throwing rocks at its own roof is its best selling point: If AI is so powerful and so dangerous and soon to be so ubiquitous, then what is so unexpected about people blaming everything on it?
Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.
I just hope that, unlike ChatGPT-induced psychosis, chatbot addiction, AI-blamed layoffs, and a growing trend of illiteracy, it stops.