Twitter, Elon Musk, and the Information Crisis
We're more vulnerable than ever to AI-powered misinformation on Twitter.
I’ve written about AI-powered misinformation here, here, and here. About the dangers of social media and recommender systems here and here. And about Elon Musk and his endeavors here and here. Today I bring you a mix of the three.
I’m super tired of the bird app these days. You probably are, too. But what’s happening right now—with Twitter and Musk as epicenters—could send shockwaves through society by redefining what the words “reliable information” mean.
There’s nothing novel about our inability to assess with certainty the sources of our truths—misinformation has existed since the dawn of time. But even if humans don’t need fancy technology to spread lies, AI and machine learning systems amplify the capabilities of malicious actors to unprecedented levels—that’s why it’s important to draw a clear picture of what’s going on.
We’re living through a Twitter news rollercoaster, so I’ll approach this article with more caution than usual. I don’t really have a strong opinion (I don’t have a clue how to run a social media company), but I have this weird sensation that we’re going downhill. I’ll share with you the events I consider relevant to help us understand what Twitter—as a key source of information—will become post-Musk.
And I’ll do so with my AI lens on. Because, whether we like it or not, AI plays a crucial role here.
Can Twitter be the town square of trustworthy information?
Despite being a “minor” social media site in terms of users, growth, and revenue, Twitter is one of the main sources of news and information for people in the US. So much so that it’s become common practice for journalists to use tweets as sources, giving the platform even more power. Potentially, Twitter could be much better than TV channels or news magazines, which give us zero control over the realities they share.
But now Musk has bought it and will soon take our “de facto public town square” private—to make his dream come true and possibly turn ours into a nightmare. He has spent this week twisting the guts of his new toy in ways that are raising questions, suspicion, and innumerable conflicts with users.
No one knows the true motives that led Musk into this hellscape, but one of his stated goals is to verify all human users and get rid of AI-powered spam bots “or die trying.”
Just yesterday he tweeted that Twitter’s mission is to become the “most accurate source of information about the world.”
If true, this is a respectable purpose. The key issue here, as Gary Marcus described, is that AI systems (language models in particular) are very good at generating misinformation but notably bad at detecting it. If Musk wants to combat mis- and disinformation, he will need advanced tools (or a brute force approach).
Generating misinformation is easier, cheaper, and faster than ever. Anyone can exploit AI systems: from conventional deepfakes to language models like GPT-3 and text-to-image diffusion models like Stable Diffusion. These AIs can’t tell right from wrong and often make up realities that have no grounding in the real world. At the same time, because these systems evolve so fast, we’re lagging behind in detection and recognition countermeasures.
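To see why the detection side is so hard, consider one of the most common cheap approaches: scoring a text’s perplexity under a language model and flagging suspiciously predictable text as machine-generated. The sketch below is a toy illustration of that idea, assuming GPT-2 as the scoring model and an arbitrary threshold of my own choosing; real detectors are more sophisticated and still unreliable.

```python
# Toy sketch: perplexity-based detection of machine-generated text.
# GPT-2 is used as the scoring model; the threshold is an arbitrary
# illustrative assumption, not a validated value.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # When labels == input_ids, the model returns the mean
        # cross-entropy loss over the predicted tokens.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

THRESHOLD = 40.0  # assumption: below this, we merely *suspect* machine text

def looks_machine_generated(text: str) -> bool:
    # Low perplexity means the text is very predictable to the model,
    # a weak signal that another language model produced it. High
    # false-positive and false-negative rates are exactly why
    # detection lags so far behind generation.
    return perplexity(text) < THRESHOLD

print(looks_machine_generated("The quick brown fox jumps over the lazy dog."))
```

Note the asymmetry: generating a plausible lie takes a single forward pass, while detection reduces to a statistical guess that both sides can game.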
And I’d add another problem on top of this. Recommender systems, which literally decide what we see on Twitter (or any other social media app), are also driven by opaque AI algorithms. This wouldn’t be much of a problem if it weren’t for Musk’s recent decisions: first, he removed identity verification, and then he fired the team in charge of algorithmic responsibility.
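To make “opaque” concrete, here is a minimal sketch of what an engagement-driven ranker looks like. The features and weights are invented for illustration; Twitter’s real pipeline is a closed, far more complex system, which is precisely the point.

```python
# Toy sketch of an engagement-driven feed ranker. Features and weights
# are hypothetical; nobody outside Twitter knows the real ones.
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    likes: int
    retweets: int
    replies: int
    author_followers: int

# Hypothetical weights. A small tweak here silently reshapes
# what millions of people see.
WEIGHTS = {"likes": 1.0, "retweets": 2.0, "replies": 1.5, "author_followers": 0.001}

def score(t: Tweet) -> float:
    return (WEIGHTS["likes"] * t.likes
            + WEIGHTS["retweets"] * t.retweets
            + WEIGHTS["replies"] * t.replies
            + WEIGHTS["author_followers"] * t.author_followers)

def rank_feed(tweets: list[Tweet]) -> list[Tweet]:
    # Highest engagement score first; outrage and misinformation
    # often perform well on exactly these signals.
    return sorted(tweets, key=score, reverse=True)
```

Without someone auditing how those weights are chosen (more on that below), no one outside the company can tell what the feed is optimizing for.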
Once this frenzied maelstrom of news, changes, and unilateral decisions settles down into something stable, we’ll find out if Twitter can survive. I don’t know if Musk’s intentions are honest or whether he’ll achieve his alleged goals, but his latest actions certainly suggest otherwise.
An $8/month wall to stop AI bots
One of the most criticized changes Musk will implement is an $8/month fee to become verified, without any other requirement—a complete redefinition of Twitter Blue.
The old Blue had two functions: it originated as a way to verify a user’s identity and eventually became a status symbol (at least in the eyes of people without it).
It was a preemptive way to stop malicious actors from using AI-powered bots to run cheap, scalable, and successful misinformation campaigns, or simply to pretend to be real people. It wasn’t a perfect system by any means: Twitter is flooded with bots of all kinds, and they tweet a lot, but they couldn’t impersonate verified public figures. It was a reasonable quality check to protect the main sources of public information (politicians, journalists, scientists, etc.).
All that is gone now. Musk, in his presumed attempt to make Twitter the informative town square of our times, decided to kill two (or three) birds with one stone: the new Blue would remove the old “lords & peasants” system by allowing every human to get verified, would put a financial wall between bad bots and our feeds, and would ensure a recurring revenue stream on top.
Despite its superficial appeal, the change has important drawbacks—and they’re more nuanced than most people think.
Over the past week, people have, not without reason, criticized Musk’s decision to put up a functional paywall for content sharing. On the one hand, $8/month is a significant expense for many people. On the other hand, those who refuse to pay will see their content deprioritized.
This multifaceted measure has a primary goal: make bot-powered misinformation campaigns unaffordable at scale. As Yoel Roth, Twitter’s head of Safety and Integrity, puts it, “spam-fighting is a numbers game.” Bot-generated content would thus be buried under content from verified sources.
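To make that numbers game concrete, here is a quick back-of-envelope sketch; the campaign sizes are assumptions for illustration, not reported figures.

```python
# Back-of-envelope: what the $8/month fee would cost a bot campaign.
# Campaign sizes below are illustrative assumptions.
FEE_PER_ACCOUNT = 8  # USD per month

for n_bots in (1_000, 10_000, 100_000):
    monthly_cost = n_bots * FEE_PER_ACCOUNT
    print(f"{n_bots:>7,} verified bots -> ${monthly_cost:>9,}/month")

# Output:
#   1,000 verified bots -> $    8,000/month
#  10,000 verified bots -> $   80,000/month
# 100,000 verified bots -> $  800,000/month
# Prohibitive for a low-budget spam operation; pocket change
# for a motivated state actor.
```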
The new Blue is theoretically an effective barrier to large-scale misinformation operations. (Not going to chime in on Musk’s intention to generate more revenue as a by-product, but it may not turn out as well as he thinks.)
I see a couple of problems here. First, not every human user will apply for Blue (the motivations range from not having $8/month to spare to wanting to bring down Musk’s plans—all legitimate). Second, bad bots can still apply because identity verification is no longer a requirement.
Second-order issues derive from those two. Even though impersonation is prohibited by the Twitter Rules, there’s no longer a hard wall (the old identity verification process) to prevent individual bots—and humans too, as we’ve all seen this week—from impersonating people whose opinions carry huge weight with the public (quite important in times of midterms).
Bots will still exist (maybe not at scale), and malicious actors can easily apply for verification (even if impersonation is against the rules).
This leads me to the next point: how can Twitter ensure misinformation is detected? Who is in charge of tailoring the recommendation algorithm and ensuring the company remains responsible and accountable?
Why Musk can’t be the face against misinformation
Just a week ago, Twitter’s ML, Ethics, Transparency, and Accountability (META) team was in charge of keeping the algorithm under control. Not anymore.
The team started as an initiative to hold the company accountable for algorithmic decisions, ensure outcome fairness and transparency, and enable algorithmic choice.
“We’re … building explainable ML solutions so you can better understand our algorithms, what informs them, and how they impact what you see on Twitter. Similarly, algorithmic choice will allow people to have more input and control in shaping what they want Twitter to be for them.”
In short, they were the ones empowering people over algorithms and protecting users from AI-induced harm. But, in a display of seemingly mindless behavior, Musk laid off the whole META team without prior notice.
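To ground what “explainable ML” and “algorithmic choice” could look like in practice, here is a toy continuation of the ranking sketch from earlier (it reuses the hypothetical Tweet dataclass and feature weights). This is my illustration of the idea, not META’s actual tooling.

```python
# Toy sketch of "explainable ML" and "algorithmic choice" for the
# hypothetical ranker above. Purely illustrative.

def explain(t: Tweet, weights: dict[str, float]) -> dict[str, float]:
    """Break a tweet's score into per-feature contributions,
    so a user can see *why* it was surfaced."""
    return {
        "likes": weights["likes"] * t.likes,
        "retweets": weights["retweets"] * t.retweets,
        "replies": weights["replies"] * t.replies,
        "author_followers": weights["author_followers"] * t.author_followers,
    }

def rank_with_choice(tweets: list[Tweet], user_weights: dict[str, float]) -> list[Tweet]:
    # "Algorithmic choice": the user, not the platform, sets the weights.
    def user_score(t: Tweet) -> float:
        return sum(explain(t, user_weights).values())
    return sorted(tweets, key=user_score, reverse=True)

# Example: a user who wants a calmer, low-virality feed.
calm_weights = {"likes": 0.1, "retweets": 0.0, "replies": 0.5, "author_followers": 0.0}
```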
There’s no one left at Twitter with the ability—or intention—to stop the algorithm from taking an unintended bad turn and flooding people’s feeds with AI-powered disinformation (to influence an election, for instance).
Did Musk fire all META members as part of his sweeping dismissal of ethical AI people? Maybe because they didn’t write enough lines of code last month (which seems to be a common metric for deciding an engineer’s value)? Or did he simply miscalculate and is now trying to backtrack?
As far as anyone can tell, he did it knowingly and on purpose. My generous guess is that he applied his Tesla-PR approach to AI ethics: “there must be a zero-cost solution to this” (the other option is negligence). Maybe he thinks the new Twitter Blue will be enough to combat bots and transform Twitter into the town square of accurate information he envisions.
What I see, instead, is a Twitter with fewer barriers than ever against AI-powered misinformation tools that further blur the line between truth and fakery. And we’re the ones who will pay the consequences—at a cost much higher than $8/month.