Alberto, thank you for producing this post, it’s something we’ve needed. Your point about “the people behind AI” resonated with me, because we are (contrary to popular belief) people, not corporate overlords, billionaires or “tech‑bros” on some global conquest. We’re curious, problem‑solving, future‑looking individuals who believe in doing something meaningful with modern technology tools.
We don’t idolize AI, we haven’t drunk the Kool‑Aid, we aren’t the devil made manifest and we are not out to destroy the world. Five hundred years ago, today's “AI nerd” might have been a monk in a scriptorium, creating illuminated manuscripts for future minds. Today, we may be few in number, but we are normal, we see technology as a way to help advance the human condition, and a means by which we might play our part in the larger human project.
As Whitman put it: “That the powerful play goes on, and you may contribute a verse.” We the AI‑enthusiasts, users, experimenters and devs are striving to contribute a verse to the ongoing story we all inhabit. Let us build. Let us teach. Let us learn. Debate us if you will but please, don’t vilify us.
Thanks again for framing this so precisely for the AI community. Here is another good post that I think pairs well with your post:
The "AI is a Bubble" Narrative is Stupid, Wrong, and Dangerous
https://substack.com/home/post/p-176395058
How the memes and surface-level analysis are distracting from Nvidia's play to control the ecosystem.
by Devansh (@chocolatemilkcultleader), Oct 17, 2025
It just occurred to me that if AI Nerds work 100 hour weeks, that doesn’t leave a lot of time for them to be vocal about their motivations on twitter. Or really do much of anything besides study, eat, and optionally sleep and/or be in sunlight.
You might have set yourself a very challenging task, then, given the theoretical impossibility of access.
But you can still profile the people who are creating the narrative. “Revenge of the Nerds” was a Hollywood movie after all.
The same way a given Nerd might not be able to explain their psychological motivations for obsession, a given member of the Nerd community can often only extemporize about the motivations of the community at large based on perceived patterns after the fact. And then as a society, we don’t really seem to be directed by goals and objectives, we just seem to be good at coming up with theories for why things happen after the fact.
Jonathan Haidt calls this the “elephant and the rider” model of consciousness, though his elephant is emotions and his rider, logic. He says the rider can’t really get the elephant to do anything it doesn’t want to, but can learn to apologize really well when it tramples someone’s backyard fence.
The AI nerds’ rider is their limited conscious awareness of their true intention.
The nerd community’s rider is the subset of members who are chronically online. (Who this piece profiles.)
Society’s rider is a community of think-piece writers like yourself, Alberto.
The elephant driving it all? I don’t know.
But I’m embarrassed to admit that the argument that the generation of more wealth by whatever means will be “the rising tide that lifts all boats,” including the “cancer research” boat, is actually a little compelling. I had never heard that take before.
Maybe the elephant is something like that. The unknowable instincts of the species to create the conditions for life to, with plenty of trade-offs, thrive.
Hmm disagree with the premise! The people I'm referring to here are not chronically online. Those who are, aren't AI nerds in the sense I'm using, just LARPers or influencers and such (not everyone is working 100h/week though). But from time to time, those not chronically online also emerge from the shadows in demos, announcements, launches, and do a bit of PR, or are profiled by some notable magazine or I read about them from second hand sources or, simply, I know them myself. Let's not forget that I'm partly one haha. This is not the kind of insight you get by scrolling Twitter.
At the same time, you are correct in an important sense: whatever I've written here, I have no access to the actual motivations of these people. I merely infer them from their circumstances and what I know about them, their *expressed* goals, motivations, etc. (They also don't have access to the real ones) and my limited understanding of human psychology (being partially an outsider allows me to see things about themselves they can't accept or even perceive).
My worst sin here, and I'm fully guilty of this, is the clumsy generalization about an ill-defined concept like "AI nerds". But, alas, there's no other way that I know of to do this! (I could have abstained, and that's a completely fair criticism.)
Finally, about the rising tide, I simply don't think it's true in the sense they're using it. They're abusing an idea way beyond its scope as it was originally conceived. Thinking that making Sora videos will lead to a cancer cure is taking us for fools. It's true in the trivial sense that all roads lead to Rome but in the more important sense of "is this the best/more efficient path," I'm quite convinced they're wrong. For one simple reason: they're treating whatever wonders happen after AGI as a byproduct of solving AGI, which is not a given.
And if it doesn't happen, they will be content with the next best thing (to them) which is, of course, just a lot of Sora videos.
"They like everything that exists" strikes me in an additional way - they despise what doesn't "exist", the classic things being God and the soul, but also anything that is emotional or social. Not a coincidence that their technology reduces our imaginative spiritual ability, bringing everyone down to their level. They are all aphantasic, of course.
Certainly! Didn't mention this but it's true as well - they think of AI as an anything-maker but also as an everything-maker!
I like and follow most of your articles, but sorry, this one confuses me. You write "This is fully taken from personal observation and knowledge", and that's OK. But IMHO the subtitle "On the psychology of AI nerds" is misleading. Nevertheless, it is interesting to read about it.
Hi Ricardo, thanks for the input. Why do you say "misleading"? I admit this is a risky essay to write.
I was trying to say that the subtitle confused me, or that it was not very clear to me. My expectation was that the post was really about a psychology subject, but I later found out in the disclaimer ("This is not a treatise on the full psychological profile of the AI nerd") that this was not the case.
What I mean is that I'm focused on the uglier parts of the AI nerd demographic vs the full psychological profile. However, I think the confusion comes from the fact that you are a professional in the psychology space, if I remember correctly, and I'm using "psychology" here much more liberally than I would in a paper.
BTW, let me recommend a book to all that, coincidentally, I'm currently reading, and which intersects somewhat with the subject of your post: Luca Possati, "Unconscious Networks: Philosophy, Psychoanalysis, and Artificial Intelligence." It proposes an approach called "technoanalysis", which analyzes AI based on Freudian psychoanalysis, biosemiotics, and Latour's actor-network theory. Recommended to anyone who wants to go deeper on this matter.
I was under the impression that Freudian psychoanalysis had been largely discredited as a scientific theory in mainstream psychology and psychiatry. Is Freud still relevant?
While it is true that Freudian psychoanalysis has long been questioned and discredited, it is still particularly relevant today in the fields of neuroscience and artificial intelligence under new post-positivist epistemological criteria. Some examples: the work of Mark Solms in the interdisciplinary field of neuropsychoanalysis has made important discoveries in neuroscience that support some of the fundamental principles of psychoanalysis. Furthermore, work such as that of Luca Possati hypothesizes that artificial intelligence algorithms possess a kind of "unconscious" resulting from projections of our own unconscious, transferred through our data, supporting this rigorously with scientific criteria. As a practice of psychodynamic analysis and therapy, psychoanalysis remains relevant in certain regions of the world, particularly in Argentina, France, the UK, and New York (USA). There is ongoing support and research in global and regional institutions like the IPA and the APA (USA). It is certainly a highly debatable topic, subject to criticism and rejection, but despite this, psychoanalysis continues to be relevant today in some fields and specific areas.
Yes, you're right. Anyway, in this post you propose very interesting points that are worth considering from a fresh and provocative perspective.
Thank you, Ricardo!
To the AI nerd, the fact that something is possible is enough justification for its existence. They display an undiscerning worldview: creation without evaluation, critique, judgment, reflection, or restraint.
I may be a bit guilty here. When a new tech comes out there is an excitement that just takes over. Can’t help it!
An interesting theme for your posts. And I broadly concur. But let’s be clear - what you’re describing is creativity. And creativity keeps bifurcating. It never locks into a standard mode, because as soon as that mode becomes normalised it’s boring. I recently visited a Damon Albarn / Gorillaz installation which kind of illustrates this point. At the height of his success (with Blur), Albarn walked away & reconstructed his ideas from the bottom up. This is true of any genuinely creative individual (see also: David Bowie). I am a little older than you & saw the first generation of internet / digital nerds as they left my university & walked into their first jobs. My most brilliant friend, a 16-year-old Croatian engineering student, never had a job. He designed some things, made a couple of million & moved to Hawaii. Another of my brilliant friends went into finance. With his money he set up a (beautiful) menswear line. It failed. So it was back to Quant, extreme wealth & 20-year-old girlfriends. I suppose what I’m trying to say is that the missing piece here is the explosion of money pumped into this space. Without that money supply these men would be mid-level management consultants or crusty old Maths professors (another friend from that era). Very, very, very few are genuinely creative in the sense that they burn their creation & start the whole journey again (Steve Jobs was this kind of person). Most smart people are fairly creative & like puzzle solving. That’s a threshold that many can reach. But most smart people also run out of ideas & use their power & prestige to leverage capital that could better be spent elsewhere. I’m very opposed to eulogising these men given that their breakthroughs are really built on the back of Turing’s / Herbert Simon’s genuinely creative ideas. And the truly brilliant among them - let’s say Stephen Wolfram - make their money & then go & rewrite science from the bottom up. This is really my point. Problem solving is fun.
All clever people like to do it - whether they’re building AI models or analysing historical shifts. But truly creative people are always seeking novelty & they’ll walk a dark path to find it.
Everything you say feels accurate to me. I just disagree in that I don't mean creativity, just creation, in the ontological sense. It doesn't matter to them if it's new, novel, original, etc. That's why the fact that LLMs are repetitive within themselves and with respect to the data they've been trained on is mostly *irrelevant* to the AI nerd!
This analysis of AI nerd psychology is fascinating - particularly the point about 'overexistence' as a form of revenge. The observation that AI nerds enjoy creation for its own sake, regardless of value or impact, really captures something I've noticed in the discourse. The Nietzschean reading is especially compelling: instead of transvaluing values, AI nerds simply flood the world with so much that value hierarchies become meaningless. It's a sobering reminder that we need to question not just the 'what' of AI development, but the 'why' behind the relentless pursuit of capability without consideration of consequence.
Exactly - and the who!
This seems to (attempt to) describe a small number of very online, very vocal AI(ish) proponents, and is nothing like the thoughtful, caring, often niche-obsessed people I work with at DeepMind and have mentored (at DeepMind, Apple, Meta, Microsoft and elsewhere).
(It’s a distorted view, just as in politics it seems like most people are far right or far left when you consider online chatter, but in fact most people are actually moderates.)
To answer the "distorted view" part: I think the most vocal people have an overwhelming effect on how things are, not just how they are perceived. In my view, the problem with the "majority of shy moderates" idea is that they don't hold much power despite being the majority. Perhaps if AI was something you voted for then yes. But I do believe that a small number of people hold a disproportionate amount of influence.
It's mostly Silicon Valley-ish so DeepMind is rather out of that bubble AFAIK (although I wouldn't be surprised if some people at DM fell under this category). I wrote some reasonable caveats in the introduction.
I understand if it annoys some people who feel they could fall under the label, but I think the public deserves to know who is making these things and the underlying motivations and thinking patterns that move them. Apparently, the public hates AI people and is afraid of AI. Apparently, the hate/fear is spreading faster than the adoption (e.g. the latest Pew research and others).
That said, I may have missed the mark on this one. Even if I'm convinced it's possible to do well something like this - a psychological analysis of a group of people defined by some behavioral traits, etc. - I may have done a poor job.
(For instance, I could have specified better what "AI nerd" encapsulates because, now that you mention it, I actually thought while writing this that the DeepMind people I know don't really fit the description.)
Yeah, fair enough — as Andrew Bird’s line goes in his song “Sisyphus”: “History forgets the moderates.” I sometimes think of being more vocal, but then I’d rather build models, work with artists and filmmakers, etc. The loud stuff just comes across as self-serving, mostly.
It’s fine that you had a go at an essay like this! I’m just caveating it further as it definitely doesn’t look like the AI folks I personally know and work with. I admit as a former Austinite who has recently moved to London that I do not have the best pulse check on general silicon valley vibes. (Though I know plenty of people who live and work there, and they aren’t strident AI-nerds as described here.)
Yeah, I didn't really think about the moderates when writing this (which is itself a symptom of the perceived distortion slowly becoming a distorted reality), but you're probably correct. Even in SV, there are probably many more people not saying anything (just doing their thing) than making some noise online or elsewhere.
But then I wonder: why are the moderates letting the loud ones completely define how AI is perceived? Isn't there value in trying to re-shift the conversation? I understand your reluctance (I'm also not very loud, I feel it's a personality trait I lack), but I feel that saying nothing and then going on building this thing that others will steer for their purposes (and that the fraction of the public that considers it sufficiently important to speak up dislikes), is a kind of acceptance by detachment.
The third part kinda talks about this. About the tendency of AI people toward being "apolitical"
Yeah, I think part of it is just not wanting to add to the noise with just more noise. As I work on generative AI, I’ve been trying to do meaningful acts of collaboration with artists, for example, such as this project I did with London artist Ben Cullen-Williams:
You assign maybe-subconscious motivations of vengefulness, pettiness, and cruelty with very little evidence. It's a lot of mind-reading, a lot of assumptions, when a much simpler explanation presents itself.
AI bigwigs bet big on LLMs and got high on their own supply. They ignored all the evidence that LLMs had inherent limits and convinced themselves that they were just on the cusp of transcending the banality inherent to being human. The kind of story we've been retelling since the Epic of Gilgamesh.
Now the honeymoon's over, reality's setting in, and they're hoping the slop can pay their bills and keep them from becoming a case study in an economics textbook. These aren't vindictive villains in some morality play... just people who really are not as clever as they think themselves.
Both things can be true at once. I've also written about that. But this is a qualitative psychological analysis. There's no "evidence" for something like this. (I warned about this in the intro.) Should the fact that there isn't "evidence" stop someone from writing on this? I don't think so - I think it's an important piece of the puzzle. Besides, if we only waited to have "evidence" to write about things - nothing would ever be written!!
I just want to be sure you're not just creating a person-guy (https://freddiedeboer.substack.com/p/planet-of-person-guys), because the internet has enough of those essays wasting HDD space, and every person-guy essay could also call itself a "qualitative psychological analysis".
Oh, I read that one a while back and actually followed a few of the links (to Sam Kriss's, for instance). I think it's a fine line to tread and I may fail. It's OK.
1) I don't claim to be right; I'm just doing what essays should do: explore. (Even better if I learn in the process.)
2) I wrote in the intro that the "AI nerd" label is a statistical fiction and that no real person has a 100% overlap with the stereotype that I'm describing here.
3) My goal, in case it wasn't clear (maybe it wasn't), is to open the category of AI people as more than "greedy idiots" because that's simply untrue.
Hope that clarified things. (Note that I wouldn't write an essay like this if I thought it was either unnecessary or unimportant.)
I like that. I'm not fully convinced myself. Trying to figure out things that are maybe too hard or that simply don't exist. But I'm *less* convinced by the money/power story or "it got out of hand" (I think that's true but incomplete)
This piece offers a fascinating perspective on the psychological underpinnings of AI development. Your observation about AI nerds embracing 'overexistence' really resonates - the tendency to engage with everything, from philosophical debates to technical minutiae, does seem central to the culture. I appreciate how you're trying to move beyond the simplistic 'greedy tech bro' narrative while still examining the less comfortable aspects of the community's psychology. Looking forward to the remaining parts of this series.
This essay captures something important about the relationship between technology enthusiasts and broader society. The observation about AI nerds loving 'everything that exists' is particularly interesting - it suggests a kind of radical empiricism that values reality over abstraction. I wonder if this perspective might actually be more grounded than the critics realize, even if the methods and approaches remain contentious. The tension you describe between building for abundance and the public's growing skepticism is worth exploring further.
A simpler explanation is that humans are driven to seek status (see The Status Game), and AI research is a top area for many folks to achieve this. It's the natural evolution of our economy becoming more centered on knowledge work.
I disagree with the notion that many people were bullied and at the “bottom of the rung” growing up, and that they can achieve some kind of redemption (if I understand your argument correctly). Even from a young age, being good in school gives children high status, all the more so in many communities.
Agreed. Except for one important thing: most AI nerds were in AI waaaay before it had status. Way before ChatGPT was even possible. It has status now, but it didn't always!
IMHO, it’s OK to pursue things for status, but you can’t also reap the benefits of “saving the world”, creating abundance, etc. It’s like the joke in the show Silicon Valley where every startup is “making the world a better place”.
PS. For me personally, I got into it early because it was clearly going to be future-proof career-wise and I found it really fun to see next-level automation.
This is quite interesting: "AI is simply the vehicle they’ve found that could manifest what looks to the rest of us like a delirium. Failing that, they will unapologetically repurpose this vehicle for something else: escape."
However, the framing of 'escape' as escape from the 'bullying normie world' misses, I think, the purer 'escape' at work. (It's a strange twist to diagnose nerds' self-identification as "IMMORTAL, OMNIPOTENT GODS!" who love all that exists, and then claim that those so deluded are also envious of and vengeful toward the distracted normies.)
Might the self-deluded nerd-gods be more interested in escape from the puzzle of reality that they lovingly obsess over, rather than escape from the cool kids? Physics is a cruel mistress. As is Death. There's no hint of vengeance in Ilya's "Feel the AGI", but there are big hints of transcendence.
Hey Harold - but aren't the gods the most vengeful of all?? (That part about the omnipotent gods is not a diagnosis but a counterpoint to the fact that no one can possibly rest their love for abundance on "in the distant future we will have solved all problems!" That only makes sense for one to say passionately if one is a god.)
That said, yes. You are correct. And that's precisely what they want to escape. The second part is exactly about what you say. The vengeance/envy stops in the first part. The second is a transcendental form of escaping. (The ugliest part of this essay was this first part, but I couldn't avoid talking about mundane envy; many of them do feel mundane envy.)
I've met many envious nerds, for sure (far fewer vengeful nerds), but I didn't feel like the mundane envy (or vengeance) was essential to the (AI) nerdiness of them in the way that you've captured the essence of (AI) nerdiness in the transcendence parts.
To put it otherwise, I don't think a bodhisattva nerd who delays transcendence for the benevolent purpose of rapturing the normies ... is a contradiction
"benevolent purpose of rapturing the normies" feels like an oxymoron to me... I don't think normies want to be raptured! But I agree that some nerds convince themselves that this is a good deed and for the greater good. After all, another psychological drive is seeing ourselves as good people.
To the AI nerd, the fact that something is possible is enough justification for its existence. They display an undiscerning worldview: creation without evaluation, critique, judgment, reflection, or restraint.
I may be a bit guilty here. When a new tech comes out there is an excitement that just takes over. Can’t help it!
An interesting theme for your posts. And I broadly concur. But let’s be clear - what you’re describing is creativity. And creativity keeps bifurcating. It never locks in to a standard mode because as soon as that mode becomes normalised it’s boring. I recently visited a Damon Albarn / Guerillas installation which kind of illustrates this point. At the height of his success (with Blur) Albarn walked away & reconstructed his ideas from the bottom up. This is true of any genuinely creative individual (see also: David Bowie). I am a little older than you & saw the first generation of internet / digital nerds as they left my university & walked into their first jobs. My most brilliant friend, a 16 year old Croatian engineering student never had a job. He designed some things, made a couple of million & moved to Hawaii. Another of my brilliant friends went into finance. With his money he set up a (beautiful) menswear line. It failed. So back to Quant, extreme wealth & 20 year old girlfriends. I suppose what I’m trying to say is that the missing piece here is the explosion of money pumped into this space. Without that money supply these men would be mid level management consultants or crusty old Maths professors (another friend from that era). Very, very, very few are genuinely creative in the sense that they burn their creation & start the whole journey again (Steve Jobs was this kind of person). Most smart people are fairly creative & like puzzle solving. That’s a threshold that many can reach. But most smart people also run out of ideas & use their power & prestige to leverage capital that could better be spent elsewhere. I’m very opposed to eulogising these men given that their breakthroughs are really built on the back of Turing / Herbert Simon’s genuinely creative ideas. And the truly brilliant among them - let’s say, Stephen Wolfram, make their money & then go & rewrite science from the bottom up. This is really my point. Problem solving is fun. 
All clever people like to do it - whether they’re building AI models or analysing historical shifts. But truly creative people are always seeking novelty & they’ll walk a dark path to find it.
Everything you say feels accurate to me. I just disagree in that I don't mean creativity, just creation, in the ontological sense. It doesn't matter to them if it's new, novel, original, etc. That's why the fact that LLMs are repetitive within themselves and with respect to the data they've been trained on is mostly *irrelevant* to the AI nerd!
This analysis of AI nerd psychology is fascinating - particularly the point about 'overexistence' as a form of revenge. The observation that AI nerds enjoy creation for its own sake, regardless of value or impact, really captures something I've noticed in the discourse. The Nietzschean reading is especially compelling: instead of transvaluing values, AI nerds simply flood the world with so much that value hierarchies become meaningless. It's a sobering reminder that we need to question not just the 'what' of AI development, but the 'why' behind the relentless pursuit of capability without consideration of consequence.
Exactly - and the who!
This seems to (attempt to) describe a small number of very online, very vocal AI(ish) proponents, and is nothing like the thoughtful, caring, often niche-obsessed people I work with at DeepMind and have mentored (at DeepMind, Apple, Meta, Microsoft and elsewhere).
(It’s a distorted view, just as in politics it seems like most people are far right or far left when you consider online chatter, but in fact most people are actually moderates.)
To answer the "distorted view" part: I think the most vocal people have an overwhelming effect on how things are, not just how they are perceived. In my view, the problem with the "majority of shy moderates" idea is that they don't hold much power despite being the majority. Perhaps if AI was something you voted for then yes. But I do believe that a small number of people hold a disproportionate amount of influence.
It's mostly Silicon Valley-ish so DeepMind is rather out of that bubble AFAIK (although I wouldn't be surprised if some people at DM fell under this category). I wrote some reasonable caveats in the introduction.
I understand if it annoys some people who feel they could fall under the label, but I think the public deserves to know who is making these things and the underlying motivations and thinking patterns that move them. Apparently, the public hates AI people and fears AI. Apparently, the hate/fear is spreading faster than the adoption (e.g. the latest Pew research and others).
That said, I may have missed the mark on this one. Even if I'm convinced it's possible to do something like this well - a psychological analysis of a group of people defined by some behavioral traits, etc. - I may have done a poor job.
(For instance, I could have specified better what "AI nerd" encapsulates because, now that you mention it, I actually thought while writing this that the DeepMind people I know don't really fit the description.)
Yeah, fair enough — as Andrew Bird’s line goes in his song Sisyphus: “History forgets the moderates.” I sometimes think about being more vocal, but then I’d rather build models, work with artists and filmmakers, etc. The loud stuff just comes across as self-serving, mostly.
It’s fine that you had a go at an essay like this! I’m just caveating it further as it definitely doesn’t look like the AI folks I personally know and work with. I admit as a former Austinite who has recently moved to London that I do not have the best pulse check on general silicon valley vibes. (Though I know plenty of people who live and work there, and they aren’t strident AI-nerds as described here.)
Yeah, I didn't really think about the moderates when writing this (which is itself a symptom of the perceived distortion slowly becoming a distorted reality), but you're probably correct. Even in SV, there are probably many more people not saying anything (just doing their thing) than making some noise online or elsewhere.
But then I wonder: why are the moderates letting the loud ones completely define how AI is perceived? Isn't there value in trying to re-shift the conversation? I understand your reluctance (I'm also not very loud; I feel it's a personality trait I lack), but saying nothing while continuing to build this thing that others will steer for their purposes (and that the fraction of the public that considers it important enough to speak up about dislikes) is a kind of acceptance by detachment.
The third part kinda talks about this - about the tendency of AI people toward being "apolitical".
Yeah, I think part of it is just not wanting to add to the noise with just more noise. As I work on generative AI, I’ve been trying to do meaningful acts of collaboration with artists, for example, such as this project I did with London artist Ben Cullen-Williams:
https://artsandculture.google.com/story/self-portrait-london-design-festival/TAWh4_-xNanz3g?hl=en
But this doesn’t go “viral” even though I think it is elegant, beautiful and deep. So, it hasn’t gotten much attention from the online chatterati.
You assign maybe-subconscious motivations of vengefulness, pettiness, and cruelty with very little evidence. It's a lot of mind-reading, a lot of assumptions, when a much simpler explanation presents itself.
AI bigwigs bet big on LLMs and got high on their own supply. They ignored all the evidence that LLMs had inherent limits and convinced themselves that they were just on the cusp of transcending the banality inherent to being human. The kind of story we've been retelling since the Epic of Gilgamesh.
Now the honeymoon's over, reality's setting in, and they're hoping the slop can pay their bills and keep them from becoming a case study in an economics textbook. These aren't vindictive villains in some morality play... just people who really are not as clever as they think themselves.
Both things can be true at once. I've also written about that. But this is a qualitative psychological analysis. There's no "evidence" for something like this. (I warned about this in the intro.) Should the fact that there isn't "evidence" stop someone from writing on this? I don't think so - I think it's an important piece of the puzzle. Besides, if we only waited to have "evidence" to write about things - nothing would ever be written!!
I just want to be sure you're not just creating a person-guy (https://freddiedeboer.substack.com/p/planet-of-person-guys), because the internet has enough of those essays wasting HDD space, and every person-guy essay could also call itself a "qualitative psychological analysis".
Oh, I read that one a while back and actually followed a few of the links (to Sam Kriss's, for instance). I think it's a fine line to tread and I may fail. It's ok.
1) I don't claim to be right; I'm just doing what essays should do: explore. (Even better if I learn in the process.)
2) I wrote in the intro that the "AI nerd" label is a statistical fiction and that no real person has a 100% overlap with the stereotype that I'm describing here.
3) My goal, in case it wasn't clear (maybe it wasn't), is to open the category of AI people as more than "greedy idiots" because that's simply untrue.
Hope that clarified things. (Note that I wouldn't write an essay like this if I thought it was either unnecessary or unimportant.)
Fair enough. Still not convinced by your thesis, but at least those concerns of mine are addressed.
I like that. I'm not fully convinced myself. Trying to figure out things that are maybe too hard or that simply don't exist. But I'm *less* convinced by the money/power story or "it got out of hand" (I think that's true but incomplete)
This attitude is not allowed here.
This piece offers a fascinating perspective on the psychological underpinnings of AI development. Your observation about AI nerds embracing 'overexistence' really resonates - the tendency to engage with everything, from philosophical debates to technical minutiae, does seem central to the culture. I appreciate how you're trying to move beyond the simplistic 'greedy tech bro' narrative while still examining the less comfortable aspects of the community's psychology. Looking forward to the remaining parts of this series.
This essay captures something important about the relationship between technology enthusiasts and broader society. The observation about AI nerds loving 'everything that exists' is particularly interesting - it suggests a kind of radical empiricism that values reality over abstraction. I wonder if this perspective might actually be more grounded than the critics realize, even if the methods and approaches remain contentious. The tension you describe between building for abundance and the public's growing skepticism is worth exploring further.
Interesting and thought-provoking.
A simpler explanation: humans are driven to seek status (see The Status Game) and AI research is a top area for many folks to achieve it. It's the natural evolution of our economy becoming more centered on knowledge work.
I disagree with the notion that many people were bullied and at the “bottom of the rung” growing up and can achieve some kind of redemption through AI (if I understand your argument correctly). Even from a young age, being good in school gives children high status, particularly in many communities.
Agreed. Except for one important thing: most AI nerds were in AI waaaay before it had status. Way before ChatGPT was even possible. It has status now, but it didn't always!
IMHO, it’s OK to pursue things for status, but you can’t also reap the benefits of “saving the world”, creating abundance, etc. It’s like the joke in the show Silicon Valley where every startup is “making the world a better place”.
PS. For me personally, I got into it early because it was clearly gonna be future-proof career-wise and I found it really fun to see next-level automation.
This is quite interesting: "AI is simply the vehicle they’ve found that could manifest what looks to the rest of us like a delirium. Failing that, they will unapologetically repurpose this vehicle for something else: escape."
However, the framing of 'escape' as escape from the 'bullying normie world' misses, I think, the purer 'escape' at work. (It's a strange twist to diagnose nerds' self-identification as "IMMORTAL, OMNIPOTENT GODS!" who love all that exists, and then claim that those so deluded are also envious of and vengeful toward the distracted normies.)
Might the self-deluded nerd-gods be more interested in escape from the puzzle of reality that they lovingly obsess over, rather than escape from the cool kids? Physics is a cruel mistress. As is Death. There's no hint of vengeance in Ilya's "Feel the AGI" but there are big hints of transcendence.
Hey Harold - but aren't the gods the most vengeful of all?? (That part about the omnipotent gods is not a diagnosis but a counterpoint to the fact that no one can possibly rest their love for abundance on "in the distant future we will have solved all problems!" That only makes sense for one to say passionately if one is a god.)
That said, yes. You are correct. And that's precisely what they want to escape. The second part is exactly about what you say. The vengeance/envy stops in the first part. The second is a transcendental form of escaping. (The ugliest part of this essay was this first part, but I couldn't avoid talking about mundane envy; many of them do feel mundane envy.)
I've met many envious nerds, for sure (far fewer vengeful nerds), but I didn't feel like the mundane envy (or vengeance) was essential to the (AI) nerdiness of them in the way that you've captured the essence of (AI) nerdiness in the transcendence parts.
To put it otherwise, I don't think a bodhisattva nerd who delays transcendence for the benevolent purpose of rapturing the normies ... is a contradiction
"benevolent purpose of rapturing the normies" feels like an oxymoron to me... I don't think normies want to be raptured! But I agree that some nerds convince themselves that this is a good deed and for the greater good. After all, another psychological drive is seeing ourselves as good people.
Pagans don't want to be Christians until they do...
But I'm just sayin' that a nerd without envy toward the normies can still be a nerd
Oh, sure. The envy is only part of the picture (I just happened to put it in the first section), so not a requirement.