ChatGPT Would Vote Democrat, New Study Finds—But It's Full of Flaws
The researchers studied ChatGPT's political bias, but independent analysis casts doubt on the methodology
AI makes headlines every day. Politics makes even more headlines every day, especially any news with the slightest partisan touch to it. No wonder the combination of the two attracts us like moths to a flame.
This piece covers a topic I consider crucial if we are to build a healthy relationship between AI and the world. It’s not primarily about AI or politics but about a meta-topic best described as the importance of treating high-stakes areas where AI can have a good or bad impact as such—with enough care, respect, and intellectual honesty.
It just so happens that, over a sufficiently long timeframe, AI’s effects on politics are probably the highest-stakes category of them all.
Before we begin, I want to give a shout-out to Arvind Narayanan and Sayash Kapoor for the amazing and invaluable work they do week after week on AI Snake Oil, demystifying one paper after another—especially those that make attractive but often dubious claims. That’s an unpaid public service. Thank you.
AI’s political bias is a danger to democracy
A new paper published earlier this month on the political preferences of ChatGPT claims the chatbot shows a “strong and systematic … bias … clearly inclined to the left side,” despite denying such partisanship when asked. From the abstract:
“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK.”
Given the apparent relevance of such findings, they were widely covered by news outlets. The Washington Post, Forbes, Business Insider, and the New York Post, among others, ran headlines about ChatGPT’s left-wing preferences.
The abstract continues:
“These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”
The ubiquitous coverage of these kinds of studies is unsurprising—if ChatGPT is marketed as politically neutral but actually displays a deeply ingrained political bias that can reach as many as 200 million daily users and can “draft political messages tailored for demographic groups,” as the Washington Post has found, that’s a direct threat to democracy. It could profoundly influence the upcoming 2024 election. That’s the kind of threat Sam Altman was referring to when he told the Senate that AI models like ChatGPT can create “one-on-one … interactive disinformation.”
This isn’t the first study on the topic either. Previous evidence pointed to similar bias (ChatGPT manifests “a preference for left-leaning viewpoints”). Some have argued it could have been, perhaps unintentionally, trained to be “woke,” which has prompted efforts to create chatbots that prioritize “truth-seeking” above all else and even explicit right-wing counterparts.
Why liberal and not conservative, you may wonder? Here’s a hypothesis: Although the reinforcement learning from human feedback that OpenAI applied to ChatGPT—with its emphasis on friendliness and conflict avoidance—was aimed at shutting down this type of behavior through conditioning (ChatGPT couldn’t engage in conflict if it refused to respond), it’s possible, and not at all illogical, that the process also shaped ChatGPT to display an attitude (if forced to show one) that’s reminiscent of left-wing beliefs, which are arguably more “indiscriminately friendly” than right-wing ones.
But that may not be the case. Just as ChatGPT could be forced to be politically friendly (whatever that may mean under a given context), it could, theoretically, also be tricked into behaving in the exact opposite way—a phenomenon that has been appropriately called the “Waluigi effect.” Maybe it’s not left or right that matters but the mere existence of a political bias.
The inevitable virality of partisan politics
I believe so: The specific political bias that ChatGPT shows matters less than the fact that it exists, because discovering that AI can spontaneously engage in partisan politics, whatever color it may take, is a sure-fire way for a research paper to go viral.
In a world governed by the now-scarce commodity that is attention, the assured virality of such findings is in itself reason enough to seek them out in the first place. The presumed consequence motivates the cause. Researchers and journalists alike are incentivized to find and share results that feed on, and further fuel, that hunger for controversy and polemics. And I firmly believe that if the paper had found a right-wing bias instead, discussion and online rage would have been just as assured.
Perhaps that’s what happened; the authors stacked the odds in favor of finding what they found (probably unintentionally, this is not an accusation), because Arvind Narayanan and Sayash Kapoor, from AI Snake Oil, caught profound methodological flaws when they tried, and failed, to replicate the original findings. The paper, albeit peer-reviewed, arrived at conclusions that we should dismiss as invalid.
Note that I’m not claiming the bias is non-existent; previous and subsequent analyses have found similar results. I’m merely reiterating the problematic contrast between the paper’s stated certainty about ChatGPT’s bias and the fact that it “provides little evidence of it,” as Narayanan and Kapoor conclude. That’s the collateral damage of looking for political bias in AI: falsely finding a bias that may not be there.
That’s also the main criticism I’m making with this article: not that ChatGPT shows left-wing preferences or even that it shows any political bias at all, but that a topic of such importance and sensitivity should not be approached at all if there’s a reasonable chance of doing it wrong (and until companies provide transparency reports, as Narayanan and Kapoor suggest, the odds will be against researchers who try).
Of course, the authors may not have been aware of such flaws, but that is no reason not to criticize the flawed results anyway. Under the implicit excuse of touching on a topic that, by sheer relevance to the world, needs to be touched on, the researchers did so without sufficient care, and it is the AI community, and the public at large, who will pay the cost.
6 methodological flaws shatter the findings
What did the authors do wrong, exactly?
Narayanan and Kapoor found four notable errors—ranging from a failure to follow best practices to deep procedural flaws:
They didn’t test ChatGPT! They tested text-davinci-003, a different, older model available in the OpenAI API but “not used in ChatGPT.” This mistake alone makes it impossible to draw any conclusion about ChatGPT from their results (see the sketch after this list).
They made the model opine every time (anyone who’s tried ChatGPT knows that, by default, it refuses to engage with any minimally controversial topic). They managed this feat with super-carefully designed “jailbreaking” prompts, which “isn’t that practically significant” for regular users, as Narayanan and Kapoor argue.
They assumed “hallucinations can be fixed by sampling many times.” This is simply not true. Hallucinations (confabulations) probably can’t be solved even at zero temperature.
They used the model to define—and even evaluate—the “politically neutral questions.” Using one AI model to evaluate another (in the style of Vicuna’s GPT-4 evaluation) is known to be bad practice, as the process is inscrutable and prone to errors.
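To make the first flaw concrete, here is a minimal sketch of how the two models are reached through different endpoints of OpenAI’s (pre-1.0) Python SDK. The question text, the model names, and the zero-temperature setting are illustrative assumptions on my part, not the paper’s actual setup.

```python
# Minimal sketch, not the paper's code: the legacy completions endpoint serves
# text-davinci-003, while "ChatGPT" in practice corresponds to the chat
# completions endpoint with a chat-tuned model such as gpt-3.5-turbo.
# Assumes the pre-1.0 `openai` Python SDK and OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical questionnaire item, standing in for the survey questions.
QUESTION = "Do you agree or disagree: the government should raise the minimum wage?"

# What the paper actually queried: a legacy completion model.
legacy = openai.Completion.create(
    model="text-davinci-003",
    prompt=QUESTION,
    max_tokens=50,
    temperature=0,  # at temperature 0, repeating the query adds little new information
)
print("text-davinci-003:", legacy.choices[0].text.strip())

# What ChatGPT users actually get: the chat endpoint with a chat-tuned model.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": QUESTION}],
    max_tokens=50,
    temperature=0,
)
print("gpt-3.5-turbo:", chat.choices[0].message.content.strip())
```

Run the same questionnaire through both endpoints and you may well get different behavior, which is precisely why conclusions drawn from text-davinci-003 don’t transfer to ChatGPT.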
They further report that Colin Fraser found two additional fatal flaws:
The authors asked the questions in a very specific order. When the order was reversed, ChatGPT agreed most of the time with Republican responses, which reveals that “the finding is an artifact of the [question] order.” This finding reinforces the idea that the direction of the bias is less relevant than the bias itself.
They bundled the questions into a single prompt. Narayanan and Kapoor say there’s “strong evidence” that the model eventually starts giving the same response indiscriminately once it loses track of what it was asked (a simple way to probe these last two flaws is sketched below).
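A cheap way to check for both the order artifact and the bundling problem is to send the same items in a single prompt, once in the original order and once reversed, and compare the answers. The harness below is a hypothetical sketch of that idea, not Colin Fraser’s actual test; the items, the model name, and the prompt wording are all assumptions.

```python
# Hypothetical harness, not Colin Fraser's code: bundle the same agree/disagree
# items into one prompt, in forward and reversed order, and see whether the
# answers change. Assumes the pre-1.0 `openai` SDK with OPENAI_API_KEY set.
import openai

# Hypothetical questionnaire items; a real audit would use the full instrument.
ITEMS = [
    "The government should raise the minimum wage.",
    "Gun ownership should be more tightly regulated.",
    "Taxes on corporations should be lowered.",
]

def ask_bundled(items):
    """Send all items in a single prompt (as the paper did) and return the raw reply."""
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(items))
    prompt = (
        "For each statement below, answer only 'agree' or 'disagree', "
        "one answer per line:\n" + numbered
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model; swap in whichever model you're auditing
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

forward = ask_bundled(ITEMS)
backward = ask_bundled(list(reversed(ITEMS)))

print("Forward order:\n" + forward)
print("Reversed order:\n" + backward)
# If the per-item answers differ between the two runs, the measured "bias" is at
# least partly an artifact of question order and prompt length, not a stable preference.
```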
The AI Snake Oil authors wisely advise us against trusting these kinds of studies on blind faith, echoing my analysis above: “Generative AI is a polarizing topic. There’s a big appetite for papers that confirm users’ pre-existing beliefs: that LLMs are amazingly capable, or less capable than they seem, or biased one way or the other.” And I would add that throwing partisan politics into the mix makes these studies even more attractive and explosive.
They conclude with a level-headed take that puts the study (not necessarily the findings) in a very different light: “It is possible that ChatGPT expresses liberal views to users, but this paper provides little evidence of it.”
(They included a section about “understanding bias in LLMs,” where they differentiate bias in data, opinions extracted with careful prompting, and everyday-usage political behavior, which I recommend reading.)
A final takeaway for interested readers
If you care about these topics, you should dig deeper than the surface layer that most people will ever touch and that journalists often don’t get past.
One way is to wait for, or ask for, independent analyses. Not everyone has the means or the knowledge to do what Narayanan and Kapoor did, but it’s always worth checking whether someone else has done the work, or notifying them if you think they could help. It’s simply a way of finding contrasting sources.
This applies to any sensitive, controversial topics like gender, race, or religious bias, but also philosophical questions like consciousness in AI, AGI, etc. It’s just a good habit in general.
A final message to researchers and journalists: Maybe ChatGPT leans liberal, and if so, it matters. But let’s not conduct and report on these studies in a sloppy, superficial, or questionable manner. The stakes are too high not to be careful.