Why Smart People Can’t Agree on Whether AI Is a Revolution or a Toy
The AI debate is broken because the forum is shared, but not our lives
Hey there, I’m Alberto! 👋 Each week, I publish long-form AI analysis covering technology, culture, philosophy, and business for The Algorithmic Bridge.
Paid subs get Monday news commentary and Friday how-to guides. I also publish occasional, timely posts. If you’d like to become a paid subscriber, here’s a button for that:
This is your weekly paid guide. I had a different one prepared, but this one is more timely, so it takes priority. Enjoy it; I genuinely enjoyed writing it.
I’ve been thinking about why the AI debate is so broken—why some people are convinced that AI is the next revolution, whereas others consider it a fun toy at best—despite everyone involved being so damn smart. I refuse to accept the easy way out of assuming the other side is full of idiots. And although there are hidden motivations and whatnot on both sides, that’s not the case for a friend telling a friend that AI is cool. So, yes, dishonesty is not absent but also not rampant.
What follows are my notes on this question (I’ve found that a conversational register works well to convey ideas I haven’t fully polished yet, and you seem to like it). My goal is to give you a “framework” for the next time someone tells you that AI is the greatest thing since we invented fire, or that it’s mostly fake (I won’t make judgment calls; those who read me know that my position is nuanced).
Let me state the thesis here and then work my way through it: both groups of people—enthusiasts and skeptics (or whatever you want to call them, no label feels right)—are telling their truth. That’s the whole story, really, as I see it: we share the forum where we debate our lives (social media, etc.), but the lives being debated are hardly shared.
This gap between the two groups is widening extremely fast, and although I’ve been writing about it since at least ChatGPT’s launch, I think it has lately become the most urgent societal matter regarding AI. My job is, literally, to do my best to close this gap. Not for nothing does my newsletter contain the word “bridge”: the world is built on bridges, and undone by gaps.
You may think it's impossible to bridge the gap at this point, but I refuse to admit defeat. Like every other conflict in life, this is a matter of goodwill and communication. Bear with me for at least one section; that's the minimum effort I ask of you. I promise this post is worth two minutes of your time.
Matt Shumer’s viral article on X, “Something Big Is Happening” (which has ~50M views at the time of writing), inspired this one, together with the responses it got, both good and bad. Other inspirations that I want to acknowledge: Derek Thompson here and roon here.
I. IS THIS WHAT YOU CALL A MAGIC WAND?
Imagine a friend comes to you with what he says is a magic wand. He’s very excited. He tells you that a flick of the wand with the correct spell can finish your work tasks, organize your life, and predict the future.
Indeed, exactly what one imagines a magic wand should do. You’re skeptical, but you’re polite, so you try it. You wave the wand and whisper something about your job—something you actually need help with—and immediately a new document appears on your computer: it's magic! It is also, as you realize once you start reading it, quite mediocre. Not terrible, but just… not good enough. It reads like it was written by a college student who skimmed the Wikipedia page for your profession. It is a magic wand indeed, but a defective one at that.
You hand the wand back to your friend and tell him, “Hey, this was cool, but it’s not nearly good enough. Fun toy, though.” Your friend says, nervously: “Wait, no, no, you’re doing it wrong. It truly works. You have to believe me!” You look at him, a bit more skeptical but still polite. Then he adds, “Actually, maybe you need a better one to compensate for your lack of skill. It will be $20/month.”
Not many friendships would survive such an interaction, right? You’re certainly not going to run out and buy a chatbot subscription. No: after a bad experience, the spell is broken. To you, the magic wand is an ordinary stick, perhaps more polished than usual, but a stick nevertheless. And you will treat it this way going forward. Now, lovely reader, you know why the story ended like this. Whether you personally think AI is great or not doesn’t matter here: even an enthusiast would react this way if the wand failed right in front of them!
At this point, you are not being unreasonable or skeptical. You are being a normal person, having a normal reaction to the situation. This is the typical experience of people who have tried AI and concluded it’s overhyped. (We should not make a norm out of the exception: demagogues, killjoys, stubborn personalities, or saboteurs are a rare kind of skeptic. The standard kind is the unconvinced friend.) The fact that your friend insists on it only makes it worse: why is he insisting that this stick is a wand? Is this some kind of weird prank? Is he selling me something? Is he on drugs?
Now imagine that half of the world—mass media, industry leaders, tech workers, etc.—insists not only that the wand is magical but that it improves by the day. Out of pressure or curiosity, you try again, in the privacy of your life, away from the claims and shouting of the forum. And the wand fails again. “This is a plot, a conspiracy!” you think. And you only grow more skeptical of it all—naturally! Over time, the conclusion becomes obvious: the discrepancy between very smart people who think AI is amazing and those who think it's mostly fake is a consequence of broken expectations. Only the experience of the power users—those who use AI 10x or even 100x more than the rest—matches the extreme hype everyone witnessed.
I want to be very clear about this, because almost every person who is enthusiastic about AI skips this part: if you had tried AI and it had done nothing for you, you would be equally skeptical, and your perception would be equally valid! Skeptics are not a different species, but the same species under a different life experience.
So, basically, the AI debate right now is two groups of honest people calling each other unfair names. The enthusiasts think the skeptics are too proud or too lazy to learn, or outright dumb. The skeptics think the enthusiasts are gullible, or selling something, or outright dishonest. Both generalize from their own experience and wrongly assume the other must be having the same experience. But they’re having opposite experiences, and the reasons why are mostly invisible. So let's take a look under the hood.
II. DIFFERENT LIVES IN A SHARED FORUM
Let me go over those invisible reasons quickly.
Your job matters. AI is very good at some kinds of work and mediocre at others, and the split doesn’t map neatly onto the kinds of work humans are good at. A lawyer who uses AI to draft briefs and then applies twenty years of judgment is getting enormous value. A lawyer who tests it on the subtlest thing will merely confirm that it’s “over-hyped.” The experience you have depends on which tasks of your job you point the tool at. I've recently written about why it's worth moving up the ladder of abstraction: focus less on “how” skills and more on “what” skills.
Your disposition matters. Some people hear “AI can do X” and think, “let me try and see.” Others hear that and think, “prove it.” Some people experience “X approach didn’t work” and think, “let me try Y approach.” Others think, “X didn’t work, so Y won’t work either.” Both are reasonable responses. But the first group accumulates experience that makes the tool better for them (because they learn its patterns, its strengths, where to push and where to back off), while the second group runs a single test and walks away discouraged, feeling lied to. Over time, these two groups diverge enormously. (This is a matter of character, e.g., open-mindedness, but also of pure preference: a thinker vs tinkerer kind of thing.)
Your background matters. If you’ve spent time around software, you have an intuition for what these tools can do even when they fail on the first try; “stochastic” doesn't sound like an insult to you. You know to rephrase, to decompose the problem, to iterate, to prompt and re-prompt. A battle with the machine is fuel for you rather than pure frustration. If you haven’t—which is most people on Earth—the first failure may as well be the last attempt. The tool’s learning curve is itself a problem to be approached openly, but AI companies insist that magic is just on the other side of a flick of the wand.
Your geography matters. If you’re in San Francisco, AI is ambient. Everyone uses it—even the janitor and the bus driver—everyone talks about it at parties and at the office and at the beach. If you live in the Bay Area and don’t play this game, you risk being inadvertently marginalized: social pressure redefines your perception of technology! If you’re in most other places on Earth—like Madrid, my city—AI is a thing you read headlines about, and perhaps the thing you use that your friends have not even heard of: and being first is much harder than being last. The density of your environment determines how much surface area you have with the tool, and surface area determines experience, and experience determines both belief and skill.
Your identity matters. My journey started like this: “AI will not be able to write well because…” followed by a bunch of chauvinist-sounding arguments about why writing in particular—my trade of choice—is uniquely human. I’ve matured into this: “I like writing; I don’t care about AI’s writing skills either way.” Some people are still playing this identity game with AI: “This one thing that I like is protected from AI!” A better approach is to disengage from this risky attitude (what will you do when time proves you wrong?). Instead, learn to be indifferent about whether AI can or can’t do stuff. Painting and writing are beautiful activities that nurture the soul either way! Don’t grant AI the kind of power over you—over your interests and sensibilities—that it doesn’t have.
I could keep going, but the point is already clear: the experience you have with AI is not determined by AI. Not to get too ontological in a “conversational” post, but the thing is: nothing exists in isolation. AI is not good or bad in itself; it’s fully contingent on who you are. It’s determined by what you do, where you are, how you’re wired, what you’ve tried, when you tried it… And in practice, none of this is visible to the person on the other side of the forum. To a lot of people, AI is, indeed, nothing more than a fancy stick pretending to be a wand.
(Importantly, this makes AI quite different from most other technologies. Other tech is less a function of who you are and more a robust function of the thing itself. Some variability exists—the internet can be obscure blogs about medieval trade or it can be TikTok—but the basics are mostly shared: a car moves fast, a calculator does math, a stove cooks food, a TV shows you shows.)
This is why Derek Thompson’s question directed at skeptics—“What would change your mind about AI?”—doesn’t really have an answer other than: you don’t change anyone’s mind, because the mind is a product of life, not the other way around. Thompson’s question assumes the disagreement is about empirical evidence or theoretical reasoning, but it’s about personal, anecdotal experience. On the public forum, you can only offer arguments, which people will evaluate through the lens of the private experience they already have. That experience will confirm what they already believe, push you further away from their circles of influence, and, over time, turn those circles into echo chambers.
So, if not that, then what does work?
There is something. It’s incredibly obvious once you know it, but for some reason we insist on doing other stuff first. And most importantly: everyone can do it right now for free.





