How the Great AI Flood Could Kill the Internet
The web has become a place of passive consumption that AI hustlers won't hesitate to exploit
Like everything else, humans are subject to the second law of thermodynamics.
Ice melts, wood burns, glass shatters. Entropy grows and we age. But it's the alternative version of the law that suits us best: the principle of minimum energy. It says, simplifying, that a system's energy is at its lowest at equilibrium. That is, if we can lie down instead of standing up, we will. From that principle, an even more relatable one can be derived: the principle of least action. Whatever we do, we take the route of minimum effort. We like to be efficient.
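For the physics-minded reader, both principles have compact textbook statements; the forms below are the standard ones, stated here only as a sketch of the metaphor, not a derivation the essay depends on:

```latex
% Principle of minimum energy: at equilibrium, with entropy S held fixed,
% the internal energy U of a system is at a minimum:
dU = 0, \qquad d^2 U > 0 \quad \text{(at fixed } S\text{)}

% Principle of least (stationary) action: of all conceivable paths q(t)
% between two states, nature takes the one that makes the action stationary:
\delta \mathcal{S} = \delta \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt = 0
```

In both cases the system "settles" into the cheapest available configuration, which is the sense in which the essay borrows them.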
All that is just the elegant, scientific way to describe a universal reality that’s alien to no one and which, translated to the mundanity of daily life, can be captured in three words: We are lazy.
This is a story of human laziness and how it could singlehandedly destroy the already fragile online information ecosystem.
AI-generated books on bestselling lists
I woke up yesterday to sad yet unsurprising news: AI-generated books are climbing the ranks on Amazon's bestselling lists.
“The AI bots have broken Amazon,” Caitlyn Lynch, an indie author, tweeted on Monday. She noticed it first: “This will absolutely be the death knell for [Kindle Unlimited] if Amazon cannot kill this off.” On Wednesday Jules Roscoe covered the news in depth for Vice, stating that Amazon has since removed the aforementioned books from the lists. However, as she notes, “the episode shows that people are spamming AI-generated nonsense to the platform and are finding a way to monetize it.”
Well, it’s not that people are buying the books, of course; the quality is horrible. ChatGPT (or possibly Sudowrite) can be steered to output acceptable prose, but the people expecting a return from the grift are not willing to put in the time and effort it takes to get an AI chatbot to produce golden sentences. They’re too lazy.
Click farms complete the job. They don’t need people to read their “AI nonsense” if bots can hack the rankings, getting them the buck they seek while forcing honest, human-made writing to find luck elsewhere, overwhelmed by unfair competition and shrinking payouts.
The perfect strategy—if you don’t have any sense of dignity or appreciation for other people’s work.
From writing to the internet
AI-generated books are the latest stage of a millennia-long relationship between humans and writing. We have to go back (for a few paragraphs) to understand how we got here and why it matters.
After the invention of writing around 3400 B.C., and for thousands of years afterward, written information was exclusively created, shared, and consumed by the elite. The vast majority of people didn’t matter; they were left out of the cycle.
The printing press revolutionized this dynamic in the 15th century. Written information was still created by specialists and shared by powerful and influential elites but, for the first time in history, the plebs were part of the audience. Journalism was born in the 16th and 17th centuries, and newspapers and magazines became the main channel of news (until radio and TV came along in the 20th century). By then, written information was available to virtually everyone (in the developed world).
At the turn of the 21st century, the internet emerged as a global phenomenon, slowly but increasingly overshadowing all the other written channels. Web 1.0, as it was called, began as a place to read stuff. It democratized the consumption of written information and, more importantly, its shareability, which had previously been controlled by media owners.
But it was the second iteration, Web 2.0, which brought in the true revolution we’re experiencing today: Besides reading, people could also write freely and their words could, potentially, be read by the growing millions with access to the web. The internet became a place of participation and contribution, not merely consumption and passive sharing.
For the first time in history, creation, dissemination, and consumption were for everyone (with internet access, of course). The dream for readers and would-be writers all around the world.
The costly mistake of the algorithmic feed
But there was always a limitation; a bottleneck that not even the brilliance of the internet could overcome: we create much faster than we consume.
The printing press introduced the problem. But since Web 2.0, and the appearance of the creator class, keeping up went from difficult to impossible. Every year, more and more people write online. Yet, every year, our ability to consume written content remains constant. This unavoidable fact created an unbalanced dynamic: creators competed to attract and keep an audience. The birth of the infamous attention economy.
During the early days of Web 2.0, the solution was straightforward. High-quality content was shared more and thus, over time, consumed more. Early adopters surfed the net as explorers of uncharted territory and rapidly popularized every hidden online jewel they found. Pioneer forums, blogs, and websites that thrived did so rightfully. That was the golden era of the old internet.
But the emergence of powerful AI-enhanced search engines and recommender systems, controlled by a few big players, broke the game.
Social media platforms were the first to exploit this technology. Clever algorithm-powered feeds could magically sort, classify, and categorize information, presumably to customize our content diet. They eventually outmatched our own ability to do so, saving us time without saturating our shortened attention spans.
We slowly delegated to the feed the daunting process of finding value online in a growing sea of garbage. And we paid the price.
The internet, once a purely active tool for creating, sharing, and consuming information and entertainment, managed to make us passive consumers through its sheer success. We now scroll and scroll YouTube, Amazon, Netflix, TikTok, Reddit, and Facebook, expecting them to give us our daily dose of content.
We have, once again, allowed our laziness to get the best of us.
The Great AI Flood has begun
As you can tell, the online information ecosystem has never been particularly healthy. SEO content farms, clickbait factories, and misinformation have always had a place and always will. AI-generated content merely adds a layer on top of the pile of crap. Why should we devote our efforts to fighting it, then?
In recent months (years) AI language models, chatbots, and writing tools (all partially overlapping concepts) have become cheap, fast, accessible, scalable, and of reasonable quality. People are using them to the extent of their ability because they’re lazy. Writing, if you love it, is a pleasure. If you don’t, it’s hard, slow, and tiring. Language models solve that.
Good-enough quality isn’t better than very high quality, but our laziness has made us passive consumers. For the most part, we no longer choose what we read online; it’s chosen for us. We’re going to eat the bullshit.
And you may think, “Well, those algorithmic feeds are designed to filter out garbage that we may not like; good content will float and bad content will sink.”
That’s the intended purpose (setting aside that algorithms are actually optimized for engagement, but that’s a story for another day), but they’re not perfect. Even if they mostly achieve their mission, AI tools can generate cheap content so fast that it’s only a matter of time before it saturates the algorithms, the funnels, and the feeds, until they fail and a tiny fraction gets through. That may not sound so bad, but a tiny part of a lot is still a lot.
That’s what the Great AI Flood is. The internet, once a vibrant agora of high-quality underground sites, democratized and decentralized for creators and consumers alike, a refuge for those who wanted to escape the mainstream channels, will be deformed beyond recognition into a desert populated by AI bots.
What is it that we may lose?
You may argue that there's already tons of low-quality content on the web. Right, but just imagine what will happen if the creators of that low-quality content multiply their output tenfold, a hundredfold, or even a thousandfold.
You may consider AI-generated content to be too low quality to be worth consuming. I do; those Amazon books are so bad that I can’t believe anyone would buy them. But I wouldn’t bet against AI. There are strong incentives to improve the models further, to the point that those who have to write, but hate to, can fully outsource it.
Imagine what will happen once the tools are capable of easily churning out good prose. Imagine what will happen once those capabilities are integrated into every single writing service available on computers and phones (already happening). If an engaging essay about the origins of bioinspired technology or the aftermath of WWII is one click away, what would people choose to do?
And you may believe that most people will use AI with good intentions; to create, edit, and improve their craft, not as a pure substitute. Yet that doesn’t seem to be the case. Here’s François Chollet's hypothesis for why ChatGPT usage went down when the summer break started:
“LLMs are touted as a shortcut to economically valuable deliverables—but the market ends up using them for value-less or negative-value deliverables such as homework and content farm filler. Not exclusively, but in very large part.
What both homework & content farm filler have in common is that they critically need to sound like they were made by humans. That's the entire point—the content itself is worthless. And that's the fundamental value prop of LLMs: the appearance of human communication.”
Most people who publish AI-generated content, by definition, care little about intent, creativity, honing their craft, polishing their thinking, communicating their ideas, or just high quality overall. They care about getting it out of the way as easily and fast as possible. They care about growing a dishonest side hustle.
Once the Great AI Flood arrives, it will be very hard to return the internet to its former glory. I risk sounding like a nostalgic old guy even though I’m 30, but I do believe it. We may be about to lose one of the finest treasures of the digital era.
We’re lazy and that worked for us so far—it may not anymore.