Don't Let Them Steal Your Election
Five ways the bad guys can use AI to undermine US (or any other) democracy
As a foreigner, I've found it thrilling to watch the events leading up to the US elections unfold. As a writer focused on AI, not so much.
Lest I become a fearmonger, let me say that no one needs cutting-edge innovation to spread lies and issue convincing propaganda. Social media distribution channels matter more than ChatGPT, and they're not new.
I will admit, however, that the text and image AI tools that arrived after 2022 can turn election day into the worst nightmare for democracy advocates. Algorithms, old and new, cloud the prospect that awaits us in the months before the US election in November. Or any election, anywhere, for that matter.
We didn't prevent this risk because, one, tech companies were too busy making money and, two, in the last few years AI evolved far faster than we were prepared for. In some ways, it's still quite dumb (it can't reliably win at Tic-tac-toe), but it's good enough at telling stories and drawing pictures to upend an otherwise democratic process.
This election is the first time we face the danger of post-ChatGPT AI in a high-stakes sociopolitical scenario. That is new.
Bots pass the Turing test (which means they write and speak indistinguishably from real people). They're also more persuasive than most humans. Image and video generators can render realistic faces, which people use mostly to make jokes but also to plant doubt.
We will eventually adapt to perception-altering algorithms, but for now, they're a big unsolved problem. Here are five ways the bad guys can weaponize AI to influence the US election.
I. Deepfakes
A deepfake is, strictly speaking, a fake image or video created with a deep learning technique (Midjourney and OpenAI’s DALL-E are deep learning tools). Deepfakes exist on a spectrum from “This looks like a weird Photoshop collage” to “Why did nobody tell me the Pope's rocking Balenciaga?”
In 2018, it took daunting work to make Barack Obama appear to say things he never said. Nowadays it's trivial to do the same with Kamala Harris.
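How trivial? Here's a minimal sketch using Hugging Face's open-source diffusers library with a publicly available Stable Diffusion checkpoint. The model name and prompt are merely illustrative; any modern text-to-image model works the same way:

```python
# A minimal text-to-image sketch with the open-source diffusers library.
# Assumes a CUDA GPU and a publicly available Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

# One sentence in, one photorealistic image out.
image = pipe("press photo of a politician greeting a crowd, dramatic lighting").images[0]
image.save("fake_press_photo.png")
```

That's the entire pipeline: a dozen lines, consumer hardware, a few seconds per image. In 2018, the Obama deepfake required a research team; now the barrier to entry is a pip install.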
Among high-quality deepfakes, the blatantly obvious ones make for funny memes. The subtle I-can’t-tell-if-this-is-real type—the most dangerous, in my opinion—are powerful propaganda vectors. You don't need video generators (which are still bad); tools that alter voice and lip movements suffice.
This one of Joe Biden is evidently fake, but what about craftier ones? Does the one of Harris—which Elon Musk shared on X without the required “manipulated media” label—count?
A couple of words out of place is a subtle change, but if they twist the meaning so that a million people are suddenly enraged at the speaker, it's a subtle change with an outsized impact. Even the obvious deepfakes are powerful, if not as means of deception, then as means of expression. As Daniel Immerwahr wrote for The New Yorker, “the problem with fakes isn’t the truth they hide. It’s the truth they reveal.”
You're surely thinking, “I can always tell!” but please, don't make me post that annoying WWII survivorship-bias airplane (okay, you asked for it). No, most people would be fooled by the latest AI-generated images. In time, you and I will be, too.
You don't know what you don't know.
II. Bot swarms
One bot does nothing. Two bots do nothing. Half the population of a popular network being bots kills the entire platform.
Once the swarms spread and pollute the information ecosystem to death, the trust-first, check-later phase of the web will be over.
Statista estimates that in 2022 bots accounted for about 50% of internet traffic and, among those, more than 30% were “bad bots”, i.e. scammers, spammers, scrapers, crawlers, etc. If those figures hold, bad bots alone make up roughly 15% of all internet traffic.
And that was before ChatGPT.
What about bad political bots? They’re seemingly flooding social media with propaganda already. This year, OpenAI caught Russian, Chinese, Iranian, and Israeli actors using ChatGPT-like tech to push “covert influence operations”.
AI works rather well as fuel for national ideological propaganda campaigns—better than humans.
On the one hand, chatbots are more and more persuasive; on the other, tech companies—even those with good intentions—have all your data. Persuasion plus knowledge: imagine what a psychopath with a charming personality who knew you better than you know yourself could get out of you. Everything.
This combination of factors makes a succulent target for the intelligence agencies of foreign powers—and your own. It already happened with Cambridge Analytica.
You never quite know who you're reading online, and the chances it's a bot have gone up dramatically over the past few years. Bad actors can cleverly disguise bots as concerned citizens arguing in favor of one candidate or another. And then you read them. And they affect you.
And then you vote.
Propaganda campaigns of this kind don’t need ChatGPT, true, but the cost of using AI models has dropped to nearly zero while quality remains high.
Cheap, powerful, invisible, and omnipresent = a foe to be reckoned with.
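How cheap is cheap? A back-of-the-envelope sketch. The price below is an assumed budget-tier rate per million generated tokens, not any specific provider's; real numbers vary by model, but the order of magnitude is the point:

```python
# Back-of-the-envelope cost of an AI astroturfing campaign.
# The token price is an assumed budget-tier rate, not any specific
# provider's; actual prices vary by model.
TOKENS_PER_COMMENT = 60          # a short, ~45-word comment
COMMENTS = 1_000_000             # a million fake "concerned citizens"
USD_PER_MILLION_TOKENS = 0.60    # assumed output-token price

total_tokens = TOKENS_PER_COMMENT * COMMENTS
cost = total_tokens / 1_000_000 * USD_PER_MILLION_TOKENS
print(f"{COMMENTS:,} comments ≈ {total_tokens:,} tokens ≈ ${cost:,.2f}")
# -> 1,000,000 comments ≈ 60,000,000 tokens ≈ $36.00
```

A million fake concerned citizens for the price of a dinner.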
III. TikTok
You don't know how bad TikTok is for your sanity until you go down the rabbit hole.
It's a trap for your attention but also for your perception of the world around you. It's optimized to keep you hooked at any cost, and that includes making you think the world agrees (or disagrees) with you much more than it actually does.
They get engagement, you get enraged.
Add to that, since we're talking about the US election, that TikTok’s parent company, ByteDance, is Chinese. That's why the US government has been debating a ban or a forced sale for years.
Politico ran a story about how “The Chinese government is using TikTok to expand its global influence operations to promote pro-China narratives and undermine U.S. democracy, according to a report released today from the Office of the Director of National Intelligence.”
Would you allow your geopolitical enemy to spy on your kids—no, to hack your kids' preferences and your peers’ worldviews?
You might excuse your TikTok habit because it’s hard to prove the Chinese government has—or even needs—any influence over the platform’s content. But the truth is that the addictive algorithm, all by itself, keeps your mind numb and your attention span below a goldfish's.
TikTok is, whether under the CCP’s control or not, a dangerous vector of misinformation if not outright political propaganda.
You should run from it.
IV. X’s algorithm
Looking for the enemy beyond your borders is easy. But sometimes the danger comes from within. Twitter, once respected (if you allow me such a strong descriptor) as the world's town hall, is now a private company named X, owned by Elon Musk.
If you thought his embarrassing antics and his stranglehold over staff would be the biggest problems after the takeover, you were wrong.
Look, I'm a Spanish guy with not much at stake in the US election. This isn't a partisan write-up (although I have my preferences, which I'm not concealing). But, even if you support Donald Trump, you gotta admit that Elon Musk publicly endorsing him on a social platform people use to get informed is condemnable, if not legally, at least ethically.
Don't get me wrong, he's free to claim his stance loud and clear. What's less acceptable is that he controls the algorithm so that his tweets appear in everyone's feeds. It's not crazy to suspect that he also possesses some unknown power to decide who gets suspended (or banned) and who doesn't.
Or even, as he'd happily accuse the Chinese of doing, steering political sentiment at will.
It isn't news that Musk supports Trump; he low-key always did. But after the assassination attempt, he went all out. He reportedly commits $45 million a month to the GOP campaign. He's gone as far as sharing, without disclosing the manipulation, a fake Kamala Harris campaign ad, violating his own platform's policies on synthetic media and misleading identities.
“But he's committed to free speech!” In practical terms, Musk's “free speech” means (Community Notes aside, which I think are a nice addition) that fake news and hoaxes spread more than ever—even against his preferred direction. And that's without counting the times he participates in the disinformation himself.
What does any of this have to do with AI?
Not unlike TikTok, Twitter is governed by a recommender system: an AI algorithm that decides what to show you to keep you in the app longer. Tragically, these companies realized early on that rage bait works best to make people react and engage.
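To make that concrete, here's a deliberately simplified sketch of what an engagement-maximizing ranker boils down to. The weights and fields are hypothetical (loosely echoing the heavy reply weighting visible in the ranking code X open-sourced in 2023), and the design choice is the point: the objective counts predicted reactions and never asks whether the reaction is delight or outrage:

```python
# A toy engagement-maximizing feed ranker. Weights and numbers are
# hypothetical; the point is that the objective only sees predicted
# reactions, never their emotional valence.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    pred_likes: float    # model's prediction of likes
    pred_replies: float  # replies correlate strongly with arguments
    pred_reposts: float

W_LIKES, W_REPLIES, W_REPOSTS = 1.0, 13.5, 2.0  # assumed weights

def engagement_score(p: Post) -> float:
    return (W_LIKES * p.pred_likes
            + W_REPLIES * p.pred_replies
            + W_REPOSTS * p.pred_reposts)

candidates = [
    Post("Cute dog photo", pred_likes=900, pred_replies=10, pred_reposts=50),
    Post("Inflammatory political hot take", pred_likes=200, pred_replies=400, pred_reposts=120),
]

# Rage bait wins: 200 + 13.5*400 + 2*120 = 5840 vs. 900 + 135 + 100 = 1135.
for p in sorted(candidates, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):7.1f}  {p.text}")
```

An objective like this doesn't need to be malicious to favor rage bait; it only needs replies to be the cheapest engagement to provoke.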
Musk unapologetically allows that kind of content to flourish, so the platform that was once useful and valuable barely survives as a shadow of its former self.
I'm not going to do it, but he could be accused of interfering in the elections directly with his posts and indirectly with his social platform. Concerning.
My hot (okay, mild) take: The algorithmic feed is the worst invention of the 21st century.
V. The Liar's Dividend
This one’s surely Trump's favorite.
Every time I've seen this phenomenon mentioned in relation to US politics, he's somehow involved. Wonder if they named it after him.
The liar's dividend is, in a way, the opposite of a deepfake. A deepfake is a made-up image passed off as truth, whereas the liar's dividend entails calling out as false something that's true—you know, “you're fake news!”—and blaming AI technology for it.
Anyway, the latest of Trump's ridiculous efforts to stain Harris' campaign was calling her a cheater because, he says, the crowd that received her at the airport was fake: “She ‘A.I.’d’ it, and showed a massive ‘crowd’ of so-called followers, BUT THEY DIDN'T EXIST!” (It's a pity for him that plenty of videos reveal the crowd was real.)
Anyone who's aware of the state of the art in AI image and video generation—and it's fair to say Trump kinda knows, given that he explicitly said it was done with AI—will realize it's impossible to fabricate a realistic video of thousands of people.
BUT, it's always possible to confuse a few people with a vague accusation that mentions high tech. It's possible to convince those who are already willing to take Trump at his word that something fishy is going on.
The liar's dividend.
Trump's short-lived attempt to spread distrust about “election interference”—which somehow always targets him—was really intended to plant doubts among his followers with a somewhat plausible lie.
That's the hidden ace up the sleeve of the deceitful and the untrustworthy. And so far this election, no one has played it as eagerly as him.