This is the first part of a five-part essay on the psychology of AI nerds, a topic I’ve been wanting to explore for a while (I published a timid approximation recently).
Why “AI nerds”? I use the term liberally, but not derogatorily. It’s a shorthand to describe, under one overarching label that encompasses character, thinking patterns, interests, and goals, the psychology of those building AI for the industry at the highest level (arguably the weirdest demographic with the greatest power to shape our future). Concretely, I mean the kind of person who sees AI as an engine of unfathomable progress (their words), a vehicle toward utopia. Normal people see them as cultists, members of a new faith.
By virtue of variability and personal inclination, however, the term is a statistical fiction, a stereotype: it captures some common traits that define the average AI person, but no individual can be faithfully encapsulated by the label. Many people working in AI don’t fit under it, and some who don’t work in AI do.
What’s my motivation to write this? Normal people (whom they refer to as “normies,” i.e., the majority of the population) misunderstand AI nerds. This leads to confusion when trying to answer one important question: why are they doing this AI thing the way they’re doing it? For instance, journalists tend to think AI nerds are moved by money and power, but that’s far from the whole story.
To the extent that money and power are involved, thinking solely in those terms is barely scratching the surface (thinking about money is thinking about executives and investors—the visible faces—not about the engineers and developers painstakingly building the AI models in the lab; and yet, without them, there’s no industry!). I hope to amend this critical misunderstanding.
What’s my goal from writing this? That you leave smarter than you came in about the people behind AI. I want to help you make sense of AI nerds and urge you to start thinking in terms of psychological traits rather than mundane incentives like power and money. AI nerds are shaping everyone’s future singlehandedly and unilaterally, so figuring them out is our last chance to prepare for that. Otherwise, any reasonable resistance or objection is dead in the water.
Do I have the authority to write this? I am confident the answer is yes. Two reasons: first, I live among AI nerds online, reading them, observing them, and lurking in AI circles constantly—I am, partly, an AI nerd myself. Second, and most importantly, I’m partly not an AI nerd. As a Spanish guy, I’m geographically and culturally divorced from their Silicon Valley influences; as a half-outsider, I can recognize their blind spots while being familiar with their thinking.
Disclaimer (for nitpickers): This is not a treatise on the full psychological profile of the AI nerd; I’m focused on their failure modes and flaws, so I won’t be praising their virtues (which are plenty; sorry to disappoint you if you thought they were the devil incarnate!). Instead, I will try to explain to the “normie world” that already hates AI nerds why they can come across as unlikable without being evil, malicious, or ill-intentioned. I try to be fair, but I won’t pull punches.
Besides, for the sake of transparency, I won’t be citing this or that study on AI nerds’ psychology—that’s not a thing. This is fully taken from personal observation and knowledge.
Schedule: I will publish the second part tomorrow and so on for five days. I will upload them together as a single post once they’re all published. They’re not intended to be read as standalone pieces but in the order I publish them.
The entire essay is ~10,000 words—it has taken quite a lot of work and research to put together into a coherent whole—so this is the only part that I will keep open to free subscribers. The rest will be behind a paywall. To my knowledge, not much has been written on this, so I hope it proves worth your time. Enjoy.
Reminder: I’m running a Halloween sale for free subscribers at a 33% discount until November 3rd (Monday). I probably won’t do another one until 2026, so make sure to get yours at the reduced price of $80/year (vs $120/year). (You can also get the normal monthly subscription at $12/month.)
I. Overexistence
I keep coming back to Sam Kriss’s fantastic line from In My Zombie Era: “Nerds are people who like things simply because they exist.” If you read Kriss, equal parts master prankster and master essayist, you will soon realize he’s an expert in many domains of thought—he’d hate this flattery, but it doesn’t matter because he seems to hate everything—but in none is he as adept as in finding novel ways to make fun of nerds, which, as a group, are his antithesis: people who seem to like everything.
AI nerds, for instance, like it when DeepMind’s AlphaFold cracks a fifty-year-old mystery in molecular biology, and also when the entire industry spends half a trillion dollars on building datacenters to train expensive models that have yet to make a noteworthy discovery.
They like it when Anthropic advances the state of the art in mechanistic interpretability—a research branch concerned with understanding how AI models function internally—and also when Meta and OpenAI launch Vibes and Sora (AI video apps), allowing users to flood the internet with cheap slop, spoiling the epistemic commons. They like it when ChatGPT helps save the lives of a few unwitting users by providing medically accurate advice and also when it turns sycophantic (which means engagement and, consequently, revenue), even if this behavioral deviation entails collateral damage in the form of AI-induced psychosis.
AI nerds, as opposed to conscious technologists and normal people, who know how to separate enthusiasm for discovery from devotion to the God of excess, seem to like things regardless of valence or value. What matters to them isn’t the quality of the outcome or the meaning—life-saving or soul-draining, uplifting or addicting, doesn’t matter—only the fact that the machine did a thing that it couldn't do before.
To the AI nerd, the fact that something is possible is enough justification for its existence. They display an undiscerning worldview: creation without evaluation, critique, judgment, reflection, or restraint.
They provide a rationale for their stance: everything that exists eventually leads to wealth creation (wealth marketed, of course, as “well-being” or “flourishing”). To the AI nerd, the actions of selfish, short-term, profit-seeking players (e.g., Meta, OpenAI, and the constellation of tiny AI startups orbiting them) align for collective gain, despite each being individually selfish. AI nerds follow this logic and end up at Vibes and Sora and still can’t recognize that they’ve killed classical liberalism and repackaged its corpse into a self-serving narrative for the AI era. The rest of the world does.
AI nerds claim that “everyone for himself” equals “all for all,” or, in concrete terms (this is a real argument), that AI video slop apps → data and revenue for OpenAI and Meta → ???? → a cure for cancer. They tend to gloss over those question marks, but, you know, who cares, the important thing is that they’re curing cancer!!! Life figures itself out in the end, the AI nerd says when pressed on this. From the premise that evil somehow cancels out against itself into a net win for society, you get a corollary: Any negative impact on the Meaningless Now is, quite literally, intrascendental.
That sounds reasonable… IF YOU’RE AN IMMORTAL, OMNIPOTENT GOD! (Or if you believe yourself to be one). But let’s not dismiss the AI nerd’s worldview too quickly.
Is there anything inherently wrong with abundance? I am not sure. History is the story of humanity against scarcity, survival against not having the bare minimum. Having plenty of goods at a cheap price is good; having people work to make this possible is also good. Doesn’t the chance of curing cancer outweigh the ills of having an AI TikTok? The problem, as it always is with these well-intended socioeconomic ideas, is Goodhart’s law—don’t make your measure your target—which eventually breaks an equivalence we take for granted, and we keep acting on it even after it’s proved false; in this case, that abundance equals flourishing. Make more AIs that make more things, and you’ll realize, perhaps too late, that “having more things” is not what you wanted.
It’s easy to see how the “abundance = flourishing” equivalence breaks down by looking at the examples I used above: unconditional abundance is the kind that leads to unbounded betting on a compute-intensive, data-intensive technology that has yet to yield a return on the investment (for both investors and humanity); it’s also the kind that leads to a putrid epistemic landscape, spoiled by a flood of low-quality information; and it’s the kind that leads to mental health problems like loneliness, depression, addiction, and, in the worst cases, psychotic breaks.
It’s inevitable: if you profess endless everything, you will have to deal with a lot of very ugly stuff. The fact that AI nerds hand-wave this problem by forecasting unfathomable wealth—ok, when? I don’t know!—as a response to criticism is a symptom of a darker truth. Infinite abundance/wealth is merely how AI nerds retroactively rationalize their otherwise unexplainable actions (like overlooking those bad outcomes). This is my contention, the core of my critique:
In practice, “liking things just because they exist” serves the AI nerd as a psychological proxy for a deeper psychological need: they enjoy seeing how the world drowns in chaos.
The reason why there are so many AI-loving nerds is that they conceive AI, on the one hand, as a field leveler—it won’t so much materialize the post-scarcity sci-fi world they’ve been fantasizing about since they were bullied in school as reset the stakes for everyone—and, on the other hand, as a utopian alibi (a lie they tell the world but also themselves): it’s all in the name of progress, wealth, well-being.
Here’s the problem: they pursue and welcome a state of affairs they claim will benefit all of humanity in the long term. Could be true, who knows? What they know is that this state of affairs will benefit only them in the short term. Let me explain.
Unfiltered creation does not attend to the objects being created (they’re unimportant) but to the process of infinite creation itself, which requires that we be ready for unprecedented change, uncertainty, chaos, and discontinuity. A world where a misstep entails, for instance, being lost in a mire of misinformation is a world where the only valid advantage is having seen it coming. That is, the advantage of the AI nerd.
When everything is suddenly possible, no one is better off except the one who knows, in advance, that everything is suddenly possible.
To understand why this is remotely something anyone would want, we have to attend to the AI nerd as an individual who is, by definition, ill-fitted to normie society (a true nerd would never deny this, for they don’t consider nerdiness shameful but the source of their otherworldly powers). At a conscious level, they do believe they’re doing good—“AI might take your job but will eventually cure cancer,” and whatnot—but the picture that unfolds at a subconscious level is quite distinct. AI nerds love to hammer their proverbial anvil, whether it’s complex number theory, matrix multiplications, or fractal woodworking (which, by the way, exists), and care about little else. That’s a maladaptive inclination in a world that favors soft skills and small talk. Furthermore, AI nerds are generally dismissive of the aesthetic component, including but not limited to their physical appearance. (I believe the actual reason is that aesthetics aren’t computable, that is, they can’t be accounted for mathematically.)
The post-AI nerd imagines the abundant future as the revenge they couldn’t exact on their own; a reckoning for the normies and the “cool kids” who, in doing well socially, didn’t pay attention to the obvious signs of the incoming wave of socioeconomic changes and thus stayed in their no-nerds-allowed parties, drinking and dancing and having fun, unaware that the world was ending for them, too. “In time,” the AI nerd rejoices, “both normies and cool kids will be drowning.” To the post-AI nerd in Kriss’s sense, it doesn’t matter if AI kills the jobs or the people or both; what matters is to feel, just for once, that they are at the top of the pecking order.
This is not out of malice but rather divine compensation. They’re happy to receive their reward for having paid attention in class. AI, by nature so fuzzy and nebulous, acts as the perfect placeholder for such crazy delusions; they can project onto it whatever they want the future to look like. That’s why you’ll see utopian and dystopian scenarios debated seriously: some want the world to be so full that they don’t have to worry ever again about being outcasts, while others simply want to see it drown. In any case, they want to escape their current circumstance as underdogs.
The AI nerd does not say, like Spanish philosopher José Ortega y Gasset would, “I am I and my circumstance,” but “I am I after my circumstance.”
A complementary interpretation is that they’re motivated by envy. Sociologist Helmut Schoeck wrote that “The envious man does not so much want to have what is possessed by others as yearn for a state of affairs in which no one would enjoy the coveted object or style of life.” We’d normally take Schoeck’s words as hyperbole or perhaps as referring to a yearning that doesn’t happen in real life but, alas, AI nerds have the power to enact this state of affairs in which there exist so many things at the same time that someone else owning their object of envy (e.g., social acceptance or unpunished blitheness) is meaningless and/or unenjoyable in a chaotic world.
The AI nerd is not really moved by a desire to defeat scarcity or poverty but by a desire to satisfy this singular envy: If I don’t belong, no one does.
There’s a Nietzschean reading here as well: value no longer relies on scarcity (having something most don’t) but on abundance itself (that no one lacks anything is good, actually). Value is no longer attributed to displaying natural attunement to a stable world but to enduring a rapidly changing one. However, I don’t think AI nerds are trying to transmute any values in the Nietzschean sense. They are, instead, rendering the hierarchy of value itself obsolete. Whereas Nietzsche’s “weak” needed morality to rewrite the rules, the AI nerd needs only more compute, more layers, more scale, more machine, more of everything; an ontological revenge rather than a moral one.
When you can’t compete in taste, beauty, or social grace, you abolish the need for those hierarchies altogether by flooding the world with so many things that none of them can be ranked. To like everything, then, is not optimism but nihilistic retaliation.
You might still think this is too convoluted and psychologically coded, that money or power are still better explanations for the behavior of AI nerds. Well, here’s the thing: As opposed to the average normie, the average AI nerd is a millionaire (or millionaire-in-the-making). If they only wanted money, why not stop or do a less controversial job, like being a quant at Jane Street? If they only wanted power, why not join a lobbying firm in Washington or some big political party? Why, instead, do they work 100-hour weeks on building AI?
“A millionaire working 100-hour weeks???” Yes, they do. Standard normie reasoning breaks down for subcultures where obsession is virtue, or rather, where people have actual power to reshape their psychological landscape, a landscape so bad that no one with the power to change it would leave it unchanged. Not everyone can free themselves from a life of suffering by subsuming the world into chaos. AI nerds can. So they do.
To the AI nerd, money and power are proxies to enact what normies have naturally and are thus unable to appreciate: A world they’re attuned to.
AI nerds have spent their lives daydreaming about being the hero of a tale no one wants to write. Even though the scenario of total chaos that I’m painting is only a remote possibility, they’ll do anything they can to let AI overhaul the world that put them last. For the first time in the history of the social loser, of the marginal weirdo—which is a long one, because the years are twice as slow when you suck—there’s a chance for reality to be “stranger than fiction,” as Mark Twain said.
How would they not lick their lips at the thought of it? How would they not behave as careless narcissists when this sort of picture unfolds before their eyes? The world wronged them, so they’ll wrong it back. In being alienated and even ostracized, they had the opportunity to see this coming, to help it come.
Kriss misses this part. AI nerds don’t so much like things just because they exist as like to see the world subjected to a perpetual state of overexistence that either drowns us all in the sewers, including them, thus inflicting their individual nightmare on everyone else, or, conversely, leaves them standing as the only type that knows how to navigate the rising tide.
If you asked them, they wouldn’t admit this. They may not even know! The brain is just so good at concealing our dark desires from us. That’s the tricky part, right: we all have dark desires, but very few have the power to carry them out.
The further humanity strides from its origins—the AI nerd repeats to himself at night while the normies sleep—the more those unsuited for this normal world have a chance to fully exist. AI is simply the vehicle they’ve found that could manifest what looks to the rest of us like a delirium. Failing that, they will unapologetically repurpose this vehicle for something else: escape.
Subscribe to receive the second part—II. Escapism—in your inbox tomorrow. You can also claim your 33% discount offer on annual plans that I’m running for Halloween until November 3rd (Monday) by following the link below:


