The Algorithmic Bridge

New Study: People Dislike AI-Labeled Products

Would you prefer if this blog were "AI-powered"?

Alberto Romero
Jun 30, 2025


Hello friends—it’s Alberto.

I’ve realized a bit too late (three years in!) that I never properly introduce myself, and a friend at Substack told me that I should. I’m a writer who once tried to be an engineer. I care about AI and technology, but also about culture, philosophy, and the complicated business of being human. This newsletter is where I try to bring those worlds together. If I had to say what I’m best at, I’d say: stubborn common sense. Come in!

Today’s post is about a new study that reveals people don’t like it when vendors put AI labels on their products.

GENTLE REMINDER: TWO DAYS LEFT

The current Three-Year Birthday Offer gets you a yearly subscription at 20% off forever, and runs from May 30th to July 1st. Lock in your annual subscription now for $80/year. Starting July 1st, The Algorithmic Bridge will move to $120/year. If you’ve been thinking about upgrading, now is the time.



Researchers from Washington State University and Temple University have published a new study on the effects of adding AI features to products.

They recruited two groups of 100 participants and, using fictional ads, measured whether people would be more or less willing to buy a product labeled “AI” or “AI-powered” compared with an otherwise identical product labeled “cutting-edge tech” or “new technology” (the control group).

If you read this blog often, the findings, first reported by the Wall Street Journal, won't come as a surprise:

. . . members of the group that saw the AI-related wording were less likely to say they would want to try, buy or actively seek out any of the products or services being advertised compared with people in the other group.

This was the case for all products but especially for high-risk products, “such as a car or a medical-diagnostic service.” I find it interesting that the authors of the study thought the AI label would make people more likely to buy the products, not less.

This result is in line with a growing trend of suspicion and mistrust toward generative AI.

Not everyone has read Apple's paper “The Illusion of Thinking” or MIT’s paper “Your Brain on ChatGPT,” but anyone who's used an AI tool more than once knows how unreliable it can be. When it nails the task, it's better than any human; when it fails, it fails in a weirdly inhuman way. The failures would otherwise be amusing; instead, they become scary.

But even if you don't actively use ChatGPT—apparently, only 34% of US adults have ever tried it, a number that feels quite low—you've surely seen the AI label a lot lately. Every single app now has an AI feature. Sometimes it's genuinely useful. Other times, it's annoyingly unnecessary. And sometimes it makes the product actively worse.

This post is for paid subscribers.

© 2025 Alberto Romero