On the Psychology of AI People
To them, we face the existence of 'mysterious creatures,' not an AI bubble
I.
Jack Clark, co-founder and Head of Policy at Anthropic, one of the top AI labs, wrote the passage I quote below for the latest issue of the Import AI newsletter (these are also the opening paragraphs of his speech at The Curve conference in Berkeley, held during the first week of October 2025):
I remember being a child and after the lights turned out I would look around my bedroom and I would see shapes in the darkness and I would become afraid – afraid these shapes were creatures I did not understand that wanted to do me harm. And so I’d turn my light on. And when I turned the light on I would be relieved because the creatures turned out to be a pile of clothes on a chair, or a bookshelf, or a lampshade.
Now, in the year of 2025, we are the child from that story and the room is our planet. But when we turn the light on we find ourselves gazing upon true creatures, in the form of the powerful and somewhat unpredictable AI systems of today and those that are to come. And there are many people who desperately want to believe that these creatures are nothing but a pile of clothes on a chair, or a bookshelf, or a lampshade. And they want to get us to turn the light off and go back to sleep.
In fact, some people are even spending tremendous amounts of money to convince you of this – that’s not an artificial intelligence about to go into a hard takeoff, it’s just a tool that will be put to work in our economy. It’s just a machine, and machines are things we master.
But make no mistake: what we are dealing with is a real and mysterious creature, not a simple and predictable machine.
What Clark says is not rare among the people inside the AI labs (Google DeepMind, OpenAI, xAI, etc.) but the norm.
They have been warning for weeks (years, actually, but I want to get specific) that AI people’s views drastically and increasingly diverge from those of the general public. They say that while the world talks about a bubble (hey, that’s me!), they’re talking “endgames.” That we’re not “emotionally prepared” for the possibility that the bubble is, instead, well, not a bubble.
Clark says we are dealing with real creatures that move in the darkness, not piles of clothes that prove harmless and inconsequential once we turn on the lights (I think the metaphor is ok). AI, they say, is not a mere tool or machine but more.
(I won’t enter into the debate of whether that “more” is good or bad for us, but focus instead on the belief that AI is “more” in the first place.)
Faced with statements like Clark’s coming from the entire crowd of people building AI, we have three interpretive choices besides ignoring them, which is what most people do but is not particularly clever, if not bordering on negligent. The first interpretation (1) is the next most popular, and also the next laziest: dismiss these words as marketing or as serving some similarly mundane (and easily understood) goal.
In this view, AI people are dishonest out of vested interests and will say whatever it takes to get investors’ money, grab the public’s attention, attract the government’s eye, and even, given that the statements grow weirder by the day, perhaps lure in like-minded talent at the expense of the competition.
To the extent one believes that Clark’s words are meant to mislead the general public, one must conclude that AI people are both wrong and bad. That choice is yours, and it’s fine if you make it; we disagree, and you are free to stop reading now because what follows will be uninteresting to you.
I do believe there’s an element of marketing and of trying to grab talent and investment and whatnot in what they do and say (as in all business dealings, for that matter), but assuming that’s the only reason (or the main one) behind these statements is, to me, deeply unserious.
(Some people are LARPers, which means they act as if they believe in these “mysterious creatures” when they don’t, but I’m not talking about them here.)
II.
I see two alternatives to indifference and cynicism.
The second (2) interpretation is simply that they are correct, which makes the truth a hard pill to swallow. People like Jack Clark, the audience at The Curve, and AI labs’ staff know more than we do because they work closely with AI and trade in better, richer, and more up-to-date private information.
When you hear statements like “make no mistake: what we are dealing with is a real and mysterious creature, not a simple and predictable machine,” you’re glimpsing the normal reaction to that abnormal knowledge.
In this view, AI people are just like the general public, except that they live under conditions that would turn a convinced atheist into a polytheist, or a science absolutist into a spirituality hippie. To us, they live inside an incommensurable reality. Thus, Clark’s attempt to convey the message to the broader world is no different from that of the people who witnessed the Miracle of the Sun at Fátima in 1917 and tried to warn the world that the sun was falling into Earth and would burn us all.
AI people exist closer to the sun, and these statements are their burning. We, the general public, witnesses to no miracle, will naturally look at them as if they’re either absolutely bonkers or malicious evildoers when they’re simply living in an unfamiliar frame of reference. If we accept this position, then AI people are not dishonest but the complete opposite. And our comments, remarks, disbelief, complaints, and distrust are, to them, a clear symptom that we don’t know enough because we’re not looking at the sun.
This explanation doesn’t require much argument on my part, just as there’s no point in Clark trying: it’s extremely hard to accept this from secondhand testimony and even harder to get the chance to perceive it as true firsthand. You gotta be inside the AI labs, so the belief is restricted to the people who already hold it. An unfortunate state of affairs.
But there’s another interpretation for which being outside rather than inside is an advantage.
III.
The third (3) interpretation is that people like Clark want to be honest and think they’re right, but they can’t actually be right.
(Notice that the three possibilities are mutually exclusive and exhaustive: AI people are either correct or not, and if they aren’t, they either know it or they don’t. You can be a good person trying to do good and still fail due to circumstances beyond your control.)
I’m referring here to the singular psychology of the people working at AI labs, for the world is a story of people and societies, and these are, in turn, a story of how each psychology relates to all the other psychologies. Within this third alternative, I discern three parts that help us understand how AI people’s psychology differs significantly (this is a qualitative analysis) from that of the average person and why it matters.