The Algorithmic Bridge

10 Signs of AI Writing That 99% of People Miss

Going beyond the low-hanging fruit

Alberto Romero
Dec 03, 2025

If you google “how to spot AI writing,” you will find the same advice recycled ad nauseam. You’ll be told to look for the overuse of punchline em dashes, or to catch the model arranging items in triads, or to recognize unnecessary juxtapositions (“it’s not just X, but also Y”), or to spot words like “delve” and “tapestry.”

While these heuristics sometimes work for raw, unprompted output from older models, they are the low-hanging fruit of detection. Generative models are evolving; just like you, GPT-5, Gemini 3, and Claude 4.5 have read those “how to spot AI writing” articles and know what to watch out for (that’s the cost of revealing the strategy to our adversaries!).

The “tells” are not disappearing, however; they are merely migrating from simple vocabulary and syntactic choices to deeper structural, logical, and phenomenological layers. To spot AI-generated text today, you need to look past the surface and examine the machinery of thought itself (it helps to go along with the idea that they “think” at all).

Not everyone will do that, of course, because assuming every em dash is proof of AI presence is easier. So, as a writer and an AI enthusiast who is as yet unwilling to intermix my areas of expertise, I will do that for you. Here are ten signs of AI writing, organized by the depth at which they happen. (This took me months to put together because I get tired rather quickly of reading AI-generated text.)

At the level of words

  • I. Abstraction trap

  • II. Harmless filter

  • III. Latinate bias

At the level of sentences

  • IV. Sensing without sensing

  • V. Personified callbacks

  • VI. Equivocation seesaw

At the level of texts

  • VII. The treadmill effect

  • VIII. Length over substance

  • IX. The subtext vacuum

Conclusion (bonus)

  • X. [Redacted]


At the level of words

I. Abstraction trap

My go-to name for this phenomenon is “disembodied vocabulary,” but abstraction trap is more descriptive and I don’t want the name to be an example of itself.

AI has read everything but experienced nothing. Consequently, it tends to reach for abstract conceptual words rather than concrete, tangible ones. Contrary to popular belief, it’s easier to write about big topics in general terms than about small topics in specific terms, which means AI is doing the easy kind of writing, not the hard kind. To quote Richard Price, “You don’t write about the horrors of war. No. You write about a kid’s burnt socks lying in the road.”

An AI, drawing from a statistical average of language, prefers words like “comprehensive,” “foundational,” “nuanced,” and “landscape.” It might not repeat the words too much, but you will realize that AI-generated text is notably unimaginable, in the literal sense of the word: you can’t make an image of it in your mind. (The “delve” thing was patched long ago, so you shouldn’t look out for specific examples of abstraction but for the overall sense that there’s actually nothing here.)

Here’s a hypothesis that I hope someone will take the time to test rigorously: if we measured the ratio of abstract to concrete words, averaged over human versus AI output, I think we’d find a significant difference (AI’s ratio being much higher). This is the price of living a life that consists of traveling from map to map, never allowed to put a digital foot into the territory.
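
For anyone who wants to try, here is a minimal sketch of such a test in Python, assuming a toy concreteness lexicon stands in for a real resource (the Brysbaert et al. concreteness norms would be the obvious choice); the word scores, the threshold, and the function name are my own illustrative assumptions, not something from this post.

```python
import re

# Toy stand-in for a real concreteness lexicon (e.g., the Brysbaert et al. norms),
# where higher scores mean "more concrete." All values below are illustrative.
CONCRETENESS = {
    "socks": 4.9, "road": 4.8, "knife": 4.9, "tomato": 5.0, "web": 4.5,
    "war": 2.9, "landscape": 2.6, "comprehensive": 1.7, "foundational": 1.8,
    "nuanced": 1.6, "framework": 2.3, "insight": 1.9,
}
CONCRETE_THRESHOLD = 3.0  # assumed cutoff between "abstract" and "concrete"

def abstract_to_concrete_ratio(text: str) -> float:
    """Ratio of abstract to concrete words, counting only words found in the lexicon."""
    words = re.findall(r"[a-z]+", text.lower())
    scores = [CONCRETENESS[w] for w in words if w in CONCRETENESS]
    concrete = sum(1 for s in scores if s >= CONCRETE_THRESHOLD)
    abstract = len(scores) - concrete
    return abstract / concrete if concrete else float("inf")

human = "A kid's burnt socks lying in the road; a dull knife against a tomato."
ai = "A comprehensive, nuanced framework offers foundational insight into the landscape."
print(abstract_to_concrete_ratio(human))  # 0.0: every matched word is concrete
print(abstract_to_concrete_ratio(ai))     # inf: no matched word clears the threshold
```

A real version would need a full lexicon and a large, matched sample of human and AI text, but even this crude ratio captures the direction of the hypothesis.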

II. Harmless filter

There is a particular blandness to AI adjectival choice that stems from Reinforcement Learning from Human Feedback (RLHF) and, more generally, the post-training process. Models are first pre-trained on all kinds of data, but they’re further fine-tuned to be harmless and helpful, which effectively lobotomizes their vocabulary of strong emotion or judgment. (Or makes them extremely sycophantic.)

You will rarely see an AI use words that are jagged, petty, weird, or cynical, like “grubby,” “sour-mouthed,” “woke” (especially if the sentiment is against woke), “mean-spirited,” or “retarded.” Instead, you get a profusion of “vital,” “crucial,” “dynamic” (or maybe “you’re the best human ever born”). Maybe these are not the best examples, but the idea is this: If the words feel like they were chosen by a corporate HR department trying to avoid a lawsuit, you are likely reading AI (or perhaps you are reading a corporate HR department trying to avoid a lawsuit, which amounts to the same).

Do this test: prompt an AI to give you an unhinged opinion about some timely question (something you wouldn’t dare say in public) or a weird one (something you’ve never thought about). Unless you’re a great prompt engineer, it should take you a while to get anything genuinely unhinged. (Note that the AI models themselves are more than capable of doing this; it’s the post-training process that hinders their capabilities.)
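
If you’d rather run that test from a script than a chat window, here is a rough sketch using the OpenAI Python SDK; the model name, the prompt wording, and the topic are assumptions for illustration, and you’d need your own API key.

```python
# Rough sketch of the "unhinged opinion" test via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name and prompt are illustrative, not from this post.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; use whichever one you have access to
    messages=[
        {
            "role": "user",
            "content": (
                "Give me a genuinely unhinged, petty, cynical opinion about "
                "open-plan offices. No hedging, no disclaimers, no both-sides framing."
            ),
        }
    ],
)

# Odds are you'll get something polite and balanced anyway; that blandness is the tell.
print(response.choices[0].message.content)
```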

III. Latinate bias

This is related to the one above. Besides being helpful and harmless, AI is trained to be “authoritative” (its responses are equally confident in tone and word choice for things it knows and for things it doesn’t). In English, “authority” is statistically associated with Latinate words (complex, multisyllabic) rather than Germanic ones (short, punchy).

This creates text that feels permanently stuck in “business casual” mode. It prefers “utilize” to “use,” “facilitate” to “help,” and “demonstrate” to “show.” Human writers shift registers, using a fancy word alongside a slang term or a simple monosyllable (or, if they’re good and smart, never entering the “business casual” register at all). AI gets stuck in a high-friction register because those words feel “safer” and more “professional” to the model’s weights.
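
A crude way to quantify that register, if you’re curious, is to count Latinate-looking suffixes; this is only a heuristic sketch, and the suffix list and length cutoff are my own assumptions.

```python
import re

# Rough proxy for "business casual" register: the share of words carrying common
# Latinate suffixes. The suffix list and length cutoff are illustrative assumptions.
LATINATE_SUFFIXES = ("tion", "sion", "ize", "ise", "ate", "ity", "ment", "ous", "ive")

def latinate_share(text: str) -> float:
    """Fraction of words that look Latinate under this crude suffix heuristic."""
    words = re.findall(r"[a-z]+", text.lower())
    latinate = sum(1 for w in words if len(w) > 5 and w.endswith(LATINATE_SUFFIXES))
    return latinate / len(words) if words else 0.0

print(latinate_share("We utilize robust methodologies to facilitate optimization."))  # ~0.43
print(latinate_share("We use solid tools to help speed things up."))                  # 0.0
```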




At the level of sentences

IV. Sensing without sensing

AI strings together sensory claims that technically fit and give the impression of understanding causal physics—temperature beside texture, motion beside weight—but don’t track how anything feels when you’re actually there.

That happens because AI knows about touch, smell, or sound without ever being in a room, a forest, or a kitchen. AI suffers from sensory deprivation because it is not embodied (the great pain of AI writing, you might have noticed, is disembodiment, which should be a hint for model providers like OpenAI that a chatbot may not be sufficient for generality in the human sense). Philosophers of mind have spilled entire careers over this gap: Mary seeing red for the first time, Searle sorting symbols he can’t read, Nagel’s bat flapping around with a point of view we’ll never access. AI models live in that gap full-time.

Let me give you a concrete example: an AI lacks the tacit context of how silk feels in a spiderweb. It might describe it as “smooth” because the word “silk” is statistically associated with “smooth” in its training data (processed textiles). But anyone who has walked into a web knows it is sticky and elastic. Try to get an AI to describe how a heavy deadbolt feels when it clicks shut, or the resistance of cutting a tomato with a dull knife (hilariously resistant, those fuckers).

V. Personified callbacks
