Why Being Weird Is Your Superpower
Let AI be the villain you need
I. The ‘decline of deviance’
There is a subversive truth that society has worked very hard to make you forget (or rather, to make you never learn in the first place): you are above average.
The reason for this statistical inevitability disguised as an impossibility (someone must be below average, right?) is simple: there is always something—a distinctive skill, trait, style, quirk—on which you stand out. We fail to value this inherent superiority because we’ve been conditioned by a world that rewards only a handful of specific triumphs. Modern systems—standardized education, corporate job descriptions, the dopamine loops of social validation, a stagnant culture—have ruthlessly narrowed the definition of what counts as “valuable,” creating a vertical ladder where only one type of climbing is permitted. If you lack the tools required to ascend that ladder (e.g., a high IQ, a pretty face, high extraversion), you are labeled—and feel like—a failure.
This narrowing is a measurable historical trend rather than an anecdotal malaise, and it is cultural as much as academic or professional. As Adam Mastroianni observes, we are living through a Decline of Deviance. We seem to be in a “recession of mischief,” which is also a recession of audacity and bravery. Most data point to this conclusion, which suggests it is neither recent nor isolated (Mastroianni gathered examples over 18 months, so what follows is barely a glimpse; read his work): high schoolers drink less, smoke less, have less sex, and are less likely to get into a fight or get pregnant; adults are less likely to move away from their hometowns, join cults, or commit crimes (not all deviance is good; that’s why we like normality). In the arts, you see the same: books, TV shows, music, movies, and websites are more homogeneous (“oligopol-ized”). Even our physical reality has succumbed to this flattening: “[b]rands seem to be converging on the same kind of logo,” cars are overwhelmingly black, white, or gray, and coffee shops from Brooklyn to Berlin share the same “bourgeois boho” aesthetic.
Importantly, Mastroianni concludes this decline is idiosyncratic to our times. So, why did we strip the texture out of life? Because 1) we are richer and 2) we optimize for safety, says Mastroianni. As life became safer, we adopted a “slow life history strategy”: we stopped taking risks to preserve our joints, our reputations, and our 401(k)s. We inadvertently traded the “illegal holes in the basement” of true creativity (à la Arturo Di Modica) for the safety of the median.
But this safety has come at a terrible cost, which brings me back to our original problem: the more unwilling you are to stand out—to break the rules on your terms—the more vulnerable you are to being replaced; anyone can, by definition, be at the center of the distribution of things-to-be.
Enter AI. I bet you are a bit anxious about it. You’ve been taught to compete at being “above average” on those narrow, pre-defined dimensions at which AI now excels (you’d rather be good than weird). The anxiety you feel in this hypercompetitive world is not so much about your lack of worth as it is about the obsolescence of the game you’re forced to play; you can’t compete against the machine in either physical or cognitive games, so you don’t know what to do. The anxiety is also about this sudden dizzying freedom, in a Kierkegaardian sense, of being anything else besides a pawn in society’s terms (illusory freedom at that, for you eventually realize that choosing safety implies that your dreams will remain dreams). This anxiety is also about your being terrified to stand out, to be deviant—and, at the same time, not to be at all.
It is time to break this vicious cycle. To stop feeling like a failure just because society doesn't value the quirks that make you you. It's time to beat this existential anxiety. It's time to contribute to bringing culture out of its modern hibernation. I can help. I will do it with an unexpected ally: I will convince you that AI is valuable not so much as a tool or agent or whatever but as a counterpoint to what we should do and be. AI is, against all odds, and against its stated purpose, the “undizzying” of freedom; the “unclogging” of culture. Let’s see how this makes sense.
II. The engine of the average
Consider three fundamental features of generative AI systems:
LLMs are trained on the open web, digesting what humans deem acceptable to say in public, effectively filtering out our darker, weirder, and more honest private selves.
LLMs are probabilistic, trained to focus on patterns that repeat most often, prioritizing the cliché over the outliers simply because the cliché is statistically more probable.
Through reinforcement learning, LLMs are explicitly trained not to diverge from the safe center of the distribution, lobotomized into a permanent state of polite agreeability.
Do you notice the pattern? AI is the median human output turned into a productive engine; the average incarnated at the level of 1) what we dare to say, 2) what the machine focuses on, and 3) what it is allowed to produce. It is the engine of what Erik Hoel has called the “Semantic Apocalypse,” a notion he’s taken to its ultimate implication in his latest essay, Our Overfitted Century: “Cultural stagnation is because we’re stuck in-distribution.”
We have spent the last few decades creating a culture that is “overfitted,” which is Hoel’s core concept (an apt term that finds its origins in AI systems that suffer from being stuck in the distribution they’ve been trained on: they produce the kinds of things whose shape they already know). Our culture is, thus, highly optimized and terrified of probing the limits of its proven formulas, just like we are. We replaced the “slow, measured (but robust and generalizable) decisions of human consciousness,” as Hoel puts it, with the hyper-efficiency of markets and machines. The natural consequence of an overfitted culture is another AI term, “mode collapse”: superhero movies that all look the same, “Instagram faces” as the convergence of someone's ideal of beauty, and all those examples that Mastroianni gathered that Hoel also quotes in his essay.
Hoel argues that AI is the accelerant of a process that was already ongoing, and I agree. It is the ultimate stagnation machine, imitating imitations of imitations, creating a “counterfeit art,” in Tolstoy’s terms, that is statistically constrained to the distribution it knows (overfitting) and with an unhealthy predilection for its center (mode collapse).
But there’s another lens through which to see this phenomenon: rather than an engine of the average, AI is a field leveler. In this sense, AI is how the cultural stakes can be reset. It is the best moment in history to find, re-surface, and exploit the parts of you that were once weird to—and punished by—a close-minded society; a stagnant culture. Those parts are now, as a statistical inevitability, redefined as above average. AI might be the accelerant of our cultural stagnation but, unexpectedly, perhaps also the solution. But why is “being weird” suddenly synonymous with “being above average”? What’s the benefit of this for you?
III. The strategic advantage of weirdness
Weirdness—understood in more philosophical terms as an unfiltered authenticity of the self, which is either weird or it isn’t at all—was already worth it as the utmost manifestation of sincere self-expression (a risky one at that, as Mastroianni points out), but now it is, on top of that, a strategic advantage. Mastroianni says life is safe, and that’s why we don’t take risks, but it’s about time you realize life just got a lot less safe. AI is the cause. And the mechanism through which it threatens you is precisely by occupying the safer places-to-be.
When AI closes one door, you can fixate on the door that was closed (“oh no, it can code better than me!”). Or you can realize that, with AI closing all the doors to the safer, society-validated spots, you are suddenly free (“wait, who am I, actually?”). Society used to punish the non-average human as “below average” because it decided which traits the average would be measured on in the first place! That’s a terrible framing, for why is a kid who’s not the best at math but great at dancing less valuable than the opposite? Now, however, as order and predictability are outsourced to machines, you are free to explore variance again. You are free to tap unapologetically, unconditionally, into your authentic weirdness, wherever that takes you! The “average” is only defined for the traits society cares about. What do you care about? Deviance is now both self-love and rebellion.
In the Venn Diagram of things worth pursuing in life, you don’t have to go to the point of overlap anymore, to the center onto which every “valid self” converges; you can lose yourself in the fringes of your uniqueness instead. Therein lies my main qualm with the idea of IQ: you can’t measure a person on a line (I am aware of the psychometric validity of the g factor, but it just doesn’t cover everything). We exist in hypercubes of possibility, beyond the three dimensions that we can perceive or the one society can monetize. Today, the non-average person reveals themselves like white light through a prism: as a multiplicity of colors.
This rebellion must happen everywhere, but the frontline is language itself. If we are not careful, we will face The Death of the English Language, as I argued last week, as a continuation of Sam Kriss’s New York Times essay. (Notably absent from the debate is the fact that all the phenomena we label as “cultural stagnation” are acutely Anglophone.)
As I wrote: AI chews on the corpus of English and spits out a smooth, featureless paste; it overfits data from the internet (not the best source), and incurs mode collapse, and then, because we read ChatGPT’s output all the time, we start speaking in the same register, using the same phrasings. English is on a slippery slope toward a “death by consolidation; death by convergence.”
The more we use AI to polish our emails and essays, the more we unconsciously mimic its tics; the “delving,” the “tapestries,” (and many other signs I enumerated elsewhere). We risk a “human collapse” akin to “model collapse,” which is simply the phenomenon of mode collapse scaled to the collective of models feeding on the internet, each one outputting the same center of the same distribution. Our own speech becomes a recursive loop of slop: “English will cease to be alive: it will die in the lazy corners of the Library of Babel.”
You can be above average today by rejecting this smooth paste and embracing the weird. In writing, this embrace will lead to striking quirks and unhinged styles, as noted by Bree Beauregard:
the more chatgpt is around, the more i want to make my writing unhinged. if people are going to use ai to write and sell books just for profit, i’m definitely going to use writing as an exploration of creativity and art. anyone who complains about lowercase, or structure, or em dashes, needs to seriously reevaluate their relationship to the art of writing.
Stand-up comedian Ken Cheng shows us the way with his touretting:
I am now inserting random sentences into every post to throw off their language learning models. Any AI emulating me will radiator freak yellow horse spout nonsense. . . . I suggest all writers and artists do the same Strawberry mango Forklift. . . . We can tuna fish tango foxtrot defeat AI. All. The. Time. Piss on carpet.
Do not make the mistake of thinking this strategic advantage applies only to the “creative types.” We are all constantly solving problems, whether technical, interpersonal, or intimate, and these problems are often best solved by thinking laterally and by bringing our singular mix of character and experience to the table. The belief that embracing one’s uniqueness in life is a task exclusive to artists and writers is society’s most terrible lie, designed to reshape you into a replaceable pawn. Be a pawn when necessary; be a complete human being everywhere else.
IV. Existing out-of-distribution
This normalizing of anti-normativity might also lead to quieter tendencies that will eventually shape new standards and customs for us as human beings. Mastroianni bets on this in another essay of his, 28 slightly rude notes on writing:
We’ve got a once-in-the-history-of-our-species opportunity here. It used to be that our only competitors were made of carbon. Now some of our competitors are made out of silicon. New competition should make us better at competing—this is our chance to be more thoughtful about writing than we’ve ever been before. No system can optimize for everything, so what are our minds optimized for, and how can I double down on that? How can I go even deeper into the territory where the machines fear to tread, territories that I only notice because they’re treacherous for machines?
I’m surprised he never made the jump from this powerful insight to his piece on the decline of deviance! With this, our problem shifts: it is not to accept that AI has achieved “average status” (and above) in many areas, but to learn how to un-achieve it ourselves. Historically speaking, the goal was to be as normal as possible; now it is the exact opposite: be the weirdest you can be.
You could give in to AI—write for it or with it—and accept your place as the assistant to the machine, a sort of “reverse centaur,” to use Cory Doctorow’s quite appropriate term. You could, in a fit of anger and defiance, try to beat AI, but, oh boy, that’s a dangerous game.
Or you could simply not build your identity around it or against it, but aside from it. As I said in the beginning, use it as a counterpoint from which to grow indifferent to it. By being weird, you set yourself apart; you won’t be confused with today’s AI or absorbed by the ones to come. By being weird, you will expand your catalog of skills beyond what the cultural mainstream deems safe and avoid falling into the standard clichés an overfitted AI falls into.
By embracing the weird, you immediately exist out of distribution: a line among dots, a blood-and-flesh person among stick-men, a main character among NPCs, a hypercube in this low-dimensional world. Not just better but rather untouchable to a society that has forgotten that everyone is above average.
If you agree with the purpose behind this essay but still think you are not weird enough to put the advice into practice, that’s where you’re wrong: weirdness is defined against expectations and normativity, so when the baseline is set by AI and by a stagnant culture, we are all potentially weird by default, as a corollary of our circumstances. In a world where a trillion AIs exist (soon to be ours) and where the internet is a “dead morass of counterfeit digital personas,” as I wrote in Wherever I Go ChatGPT Follows Me, every single human is an unrepeatable jewel never preceded and never to be found again.
As a writer who’s knowledgeable in AI, I see this clearly: the more AI grows better at the things society considers valuable, the more room I see to inhabit the parts of me I was inadvertently suppressing out of fear of not fitting in and out of fear of not being seen as valuable. I notice this most notably in my writing. When I think of why AI writing doesn’t work for me—why I can notice it so easily most of the time, or so I think—I always reach the same conclusion: it is not weird enough.
That indescribable dullness we feel but can’t define is AI’s anti-weirdness manifest. I’ve heard people complain that they don’t know how to pull AI models out of their tendency toward the average. I say that is good. It is good because we’ve been living our whole lives inside a hole in the ground—the stinking hole of normativity-through-performance—but now that it has been spoiled by AI and by a risk-averse, overfitted culture, we’ve been pushed out. You just have to come to terms with it. (If you’re reading this, thank you, AI.)
One of the great cultural effects of AI will be, contrary to popular belief, that we will start rewarding weirdness more than we ever did. Never forget: Humans always win in the end because humans define the terms for what winning is. The goal is no longer to be better than others but to be less like everyone else.


