26 Comments
OmR:

People keep saying “AI makes everyone average,” but that’s backwards. AI only reflects the user: weak thinkers get bland output, strong thinkers get accelerated insight.

Uniqueness isn’t about quirky style; it’s about cognitive structure, and AI actually amplifies that.

The only people threatened by AI are the ones whose thinking was always copy-paste level.

AI doesn’t erase individuality, it reveals who had real individuality to begin with.

Alberto Romero:

I get where you're coming from, but this is largely untrue of systems that have gone through aggressive post-training (e.g., ChatGPT). I agree regarding base models (e.g., GPT-3).

OmR:

Post-training definitely makes models like ChatGPT more median in their default tone; I agree with you there. But the claim I’m making isn’t about the model’s base style; it’s about the user’s cognitive process.

Post-training affects what the model says on its own, but it doesn’t determine how far a user pushes it, what questions they ask, or how they integrate its output.

Two users can use the same aligned model and produce wildly different cognitive trajectories; that’s where individuality shows up.

Alberto Romero:

Yes. Where I disagree, I think, is the degree to which one can meaningfully steer a model out of its collapse. Not even the best prompt engineers can do much when the model has been severely lobotomized.

OmR:

I get your point about alignment constraints. RLHF definitely narrows stylistic variance.

But just to clarify my argument: I’m not claiming that prompt engineering can “break” the alignment layer. I’m saying that the alignment layer doesn’t erase the cognitive differences users bring into the interaction.

Even with a compressed expressive space, two users with different ways of framing problems, asking questions, or structuring ideas will still elicit very different trajectories from the model.

In that sense, the individuality isn’t in the surface style the model outputs; it’s in the underlying reasoning path that the user drives.

So the degree of steerability I’m talking about isn’t about jailbreaks or clever prompts; it’s about how the user thinks, which the model amplifies rather than flattens.

Would you agree that alignment constrains form more than it constrains the user’s cognitive fingerprint?

Alberto Romero:

Yes, we agree on that point. Users are merely "at risk" of having their cognitive fingerprint erased. My post is not an assertion that this necessarily happens but a warning about how not to let it happen! I also agree that the output can be steered as a function of the user's character and cognitive prowess.

Browny:

It suggests AI is just a mirror. If you lack proactivity and blindly accept its reflections, you’ll slide into mediocrity after just a few cycles.

__browsing:

> "AI doesn’t erase individuality, it reveals who had real individuality to begin with"

I think that's a very contemptuous attitude and unjustifiably blithe regarding where the tech is likely to be in ten years.

OmR:

Thanks for your comment. I think there’s a misunderstanding worth clarifying. I’m not claiming AI can’t threaten individuality in the long run. That’s a real possibility and something we should keep an eye on.

My point was about something more immediate and observable: Right now, AI amplifies whatever cognitive structure the user brings into the interaction.

People who ask shallow questions get shallow AI. People who ask deep, mechanism-oriented questions get deep, mechanism-oriented AI.

That’s not contemptuous; it’s simply a description of how the system behaves today.

So when I say “AI reveals who had real individuality”, I’m not talking about identity or superiority. I mean that AI exposes differences in: how people think, how they decompose problems, how they generate hypotheses, and how they follow reasoning chains.

If models get dramatically stronger in ten years, the dynamic might change. But right now, AI magnifies cognitive variance rather than erasing it. And that was the part I was trying to highlight.

__browsing:

Alright, point taken.

Lucia Franchi:

This really resonates with the comment above. AI is a tool, and the output reflects how it is used and the intention behind it.

__browsing:

I'm not sure I'd describe the 21st century as a time of declining deviance by most standards, what with the pride flag and ostentatious neurodiversity and so on.

Alberto Romero:

Yep, the statistics are pretty clear, but anecdotally it still feels... wrong. I'm yet to square those two. I have another article planned on this but haven't written it yet; in this one I simply elaborated on the existing thesis.

__browsing:

There's a pretty clear trend toward increased managerial centralisation, sure, but pre-modern societies had much stronger ideas of the normative regarding individual conduct, appearance, values, etc.

Alberto Romero:

And besides, there are just so many people doing stuff in general. Maybe the mechanisms by which we attend to some things and not others are the "anti-deviance" drivers (e.g., algorithms, but also the things we measure with statistics, etc.).

The Million Things:

Fuck yeah!

Kai Williams:

I appreciate the essay, though I feel like step 1 is much less responsible for the porridge that is LLM output than steps 2 and 3. LLMs are fantastically weird, faceted objects that needn't be trained on such a narrow distribution.

An example: LLMs have "true sight" more or less -- they can figure out facts about the people who are writing them prompts. So there's some extent to which by just existing, I'll get a different output from an LLM than you will. What would the world look like if LLMs were trained to exaggerate the difference?

I agree with the broad thesis though!

Alberto Romero:

Yes, 100% agree. Actually, I should have specified that even if 1 is true to some degree, a base model that doesn't go through aggressive post-training can be fantastically weird itself (as GPT-3 kinda was). That's why some people keep begging OpenAI, Anthropic, etc. to bring back non-lobotomized models.

Jacob:

> Humans always win in the end because humans define the terms for what winning is.

Until, inevitably of course, we outsource even that decision making to our machine overlords.

Roger B:

This is a WASPish position for a hot-blooded Spaniard to take. I can see where Adam Mastroianni is coming from; after all, the US is a very conformist society, not so much Spain. Or am I wrong?

You are right to celebrate diversity, though using the word 'deviance' is problematic, as it is associated with unacceptable behavior; it may be acceptable usage in Adam Mastroianni's cultural milieu. Being an Englishman, I tend to celebrate eccentricity and non-conformism.

From my experience, LLMs can assume a variety of voices and positions if instructed to do so. The fact that standard LLM output is bland and middle-of-the-road is probably due to training that aims to please all comers.

Alberto Romero:

Hmm what is WASP about this? (Btw you should read my last one for a rather non-WASP take haha)

Roger B:

Adam Mastroianni's investigation is US centric, and he seems to take an East Coast WASPish perspective. I doubt West Coast weirdos would concur. And he does focus on criminal behavior as a marker for lack of cultural innovation!?

Although I have not spent much time in Spain, I read that the cultural scene is flourishing. My take is Spanish arts, food and cinema are at the cutting edge, wouldn't you agree?

Alberto Romero:

Spanish food is always at the cutting edge. The arts and movies, not so much. But you still didn't answer my question: what is WASP about *my* essay (not Adam's or Erik's)?

Roger B:

My point was that you complained of stagnant culture then used Adam's perspective to defend your position.

I posit that Spanish culture is not stagnant.

Alberto Romero:

You wrote: "This is a WASPish position for a hot blooded Spaniard to take." In any case, the WASPish position is what I accepted as the premise, but my point is not his, just a continuation, which is not WASP, afaict. If the premise is WASPish, that's ok, it's still useful (and I think mostly correct). (FWIW, I had a different draft countering the premise altogether, which I think is not fully true, but I preferred to build on it rather than confront it directly.)

Roger B:

Re: your last article, I am not so sure that there is a 'suffocation of English', as it is spoken with an immense range of voices. Sentence structure varies with the culture that originates it. A native English speaker from Ireland structures their sentences differently from a Canadian, who in turn differs from an Aussie, etc.

Surely the same must be true of Spanish?
