Perhaps you don’t have any problem where AI can help you.
A trivial sentence, right? Surprising, perhaps, but surely a possibility: maybe you just don’t see the point of using AI. Yet nowadays you won’t find many people agreeing with it. AI is not only everywhere; it must be everywhere, or else you’re missing out on the opportunity of your life.
One reason for this paradox: narratives and realities both move forward, but not necessarily at the same pace. The narrative that AI is the technology of the future that will revolutionize the world has spread much faster than our ability to use the tools with real benefit for our lives.
AI, they say, is going to change the world across sectors at all levels, from entry positions to middle managers to CEOs, from medicine to transport to math theory. It’s going to redefine our day-to-day experience just like the internet, smartphones, and social media.
That’s all I hear. I know about this stuff and know there’s truth in there; AI has a lot of untapped potential, especially because research happens fast and there’s substantial funding yet most applications remain underexplored.
But I also know that many of you are desperately lost.
Not because you don’t understand AI (although for people who don’t read TAB that’s the likeliest scenario), but because the discourse that’s coming from those who make the claims doesn’t match your experience.
Where exactly does AI fit in your life?
A class of desperation-driven AI users
You should, by all means, use generative AI if it can help you.
For what, you say? Well, that’s the part you must find out, apparently. Researchers doing the discoveries and companies building the systems are figuring it out on the go, just like you and me. So, where’s the manual?
Most people know about AI because of ChatGPT. It launched more than a year ago, but most of them still haven’t tried it, and among those who have, the majority didn’t find a useful task for it. It’s not hard to find trivial uses for language models (e.g. summarization and idea generation). You know what you can do with ChatGPT.
That’s not the problem. Translating toy examples into real-life settings is.
Even in the cases where people are still unaware of what language models cannot do (we might overestimate AI’s abilities, e.g. because of anthropomorphism), that limitation would only surface after they’d found a potential application.
It’s not a lack of knowledge about the tool that limits usage but a lack of knowledge about how to apply it in the real world.
Unless you’re a coder, it’s not easy to find a place for ChatGPT’s skills in your job workflow or other activities, even if you know what it can and can’t do. This extends to the entire offering of AI products, like image and video generators. I use DALL-E for my cover images and that’s all. ChatGPT? I don’t see the point.
AI leaders keep marketing generative AI as more impactful than fire and electricity, but people often don’t know what to do with it.
That’s not bad in itself; innovations take time to settle, and logistical friction slows down the integration. But it’s weird to hear, time and again, those who sell the tech exaggerating its virtues to the point of alienating everyone else, or making them feel desperate to understand the part they’re missing to make it work.
Contrary to what media headlines suggest, that’s a large group of people. The number of people who have found a perfect spot for AI to enhance their productivity or entertainment or whatever is tiny (the use cases seeing massive adoption are ridiculously limited and not exactly the best ones, e.g. creating spam farms, cheating on homework, or making non-consensual porn).
A good portion of would-be AI users are desperation-driven users: the ones who can’t enhance their productivity but are trying their hardest to make sense of a discourse that, to them, must sound like nonsense.