Why No One Wants AI Ethics Anymore
And what they should do to recover the credibility they've lost
I respect and admire researchers and thinkers who are devoted to assessing and understanding the sociopolitical aspects of AI. In particular, those who belong to the group now called “AI ethics.” You may feel an instinctive dislike toward that label; I don’t blame you. And if you do, you will probably identify with what I have to say today. I’m not here to criticize their goals, though—that’s precisely what I respect and admire about them—so let me first honor the singular value I think they bring to the AI community, without which it’d be a much worse place.
AI ethics is a net good for the world
Anyone who wants AI to improve the world should, if not admire, at least respect AI ethicists’ work—even if only in intent. Not just because they’re pretty much the only ones challenging big tech companies to prevent them from making the same mistakes with generative AI that they made with social media; or because they’re doing so while resisting powerful opposing forces that relentlessly try to stop them. No, the reason they deserve universal respect is apolitical: the goals they chase are directed toward improving the world’s collective well-being. There’s hardly a more noble purpose.
You may not agree that the short-term AI risks they aim to tackle are the most urgent. In the end, discrimination, bias, and underpaid labor tend to affect minorities disproportionately. If you’re not part of them—and, by definition, you most likely aren’t—it’s hard to see how this can top longer-term risks that may affect everyone. The latter are, strictly speaking, more “existentially serious.”
(With “may affect everyone” I’m not referring to the fear that AI could kill humanity one day but to more realistic threats like unbounded misinformation or a transversal attack on the creative workforce. I acknowledge that generative AI is merely a new vector for these issues—which not only preceded it but are “political rather than technological,” in the words of author Stephen Marche—but it could worsen them in ways we can’t prepare for.)
Anyway, I, as a white male living in an EU country, belong to the group that’s least discriminated against by AI systems, so by default I lean toward a focus on global issues. AI ethics is also involved in helping with those—if ethicists choose to emphasize harms most people don’t (can’t) relate to, it’s precisely because we don’t. Who else is going to stand up for those without a voice? I can’t relate; I’m writing this and you’re reading it. I have to make an effort to connect with some of the stories AI ethicists share on social media and in tech magazines. But I try. And in trying I sometimes manage to empathize: For some people, these “lesser” problems are as existential as the ones Hinton, Harari, Yudkowsky, and others are warning about—and actually real.
And that’s enough for me to recognize the value of AI ethics work. In one sentence, it's a dual attempt to stop AI technologies from making the world more unequal while steering them to make it more equal instead. This description comprises the majority of AI ethics projects, within or outside companies, and reflects a laudable purpose, however you look at it (even if you disagree).
What’s potentially more objectionable are the methods they may use to achieve that goal—or to convince us of its importance. That’s my focus today. In particular, I find serious flaws in their communication, in their all-too-common unwillingness to engage in polite dialogue with certain people with whom they disagree, and, perhaps most of all, in their stubborn adherence to outdated criticisms of AI systems, which undermines their credibility, not only among technology experts but in the eyes of the world at large: try to impose a truth on your audience and, in the end, no one will listen to you.
Ethicists’ dismissal of AI progress
When I started TAB, I used to quote prominent voices from the AI ethics corner all the time. There was enough hype and enough rosy-eyed prospecting that I felt it would be more useful to play contrarian, not only to recalibrate the tone but because I actually agreed with the caveats they were raising: they criticized the absence of accountability frameworks in the space, they raised concerns about the design choices behind the algorithms and how they (unintentionally) perpetuate bias, they denounced dubious data-gathering practices and the lack of consent or compensation for creators, and they found evidence of discrimination-prone AI services deployed in the real world.
I still agree with the above. Yet, as honorable as their goals may be, I’ve noticed, like surely many of you, that their manners have degraded with time. I’m hesitant to quote them anymore because the hostility they show toward anyone who disagrees with them in the slightest makes me uneasy. (Of course, I'm referring to the visible leaders, not everyone. I won't quote them because people see the group as a monolithic entity; their names are irrelevant to the point.)
I don't recall having this sensation a couple of years ago; why have they turned more aggressive? Perhaps because they feel like they’re shouting into the void as the world keeps going and people are increasingly hyping—and getting hyped by—these systems? Could be; that’s definitely happening, and it’s infuriating. Perhaps because they’re tired of fighting a war they can’t win, even when they’re right? It's discouraging how power can silence the loudest truth. But perhaps also because some of the central arguments they were putting forward to support their narrative began to crumble in the face of undeniable practical improvements, such as ChatGPT.
To sustain the thesis that corporations and greedy executives (and the decisions they make) are the true threat behind the algorithms, it’s necessary to first dismantle the illusion that AI systems have—or are on the brink of having—intelligence (or agency, thoughts, sentience, or the ability to set themselves free from our control). And to do so (which I agree with) the easiest path—the one they took—is to disparage any apparent progress and ridicule the capabilities of ChatGPT and its cousins (which I disagree with) by putting them in the worst possible light, bordering on outright contempt.
That’s the worst mistake AI ethics could make and the reason why all the other valid and essential arguments they’re making are being put, by virtually anyone outside the group, into the same “not worth considering” bag. This is a tremendous loss for the AI community. One that could have serious consequences for the increasingly irreversible imbalance of power in the industry. One that could lead policymakers, enticed by interested voices, to misplace regulations and focus them on the wrong issues.
The ‘Snake Oil Narrative’ of language models
“Stochastic parrot” was not the first but became the most famous of the many ridiculing metaphors that the ethicists wanted to get across to the general public. The paper that coined the term was widely read by supporters and detractors alike. Linguist Emily M. Bender et al. raised important points that remain true for all GPT-like systems despite the iterations that came afterward. But in no time the catchy analogy about the limitations of language models (LMs) turned into a weapon to signal one’s beliefs about the non-progress of AI. People soon began to say “stochastic parrot, so what?” and Sam Altman, OpenAI’s CEO, mocked the concept. It became an ideological flag emptied of meaning.
I could no longer use it. It was corrupted. Most people interpreted it as a rejection of AI advances—which are overhyped, yes, but not non-existent. Other metaphors followed. It seemed like a contest to find the perfect reductionist description of ChatGPT, framing it as a far more pathetic tool than the mainstream was portraying it to be: “autocomplete on steroids” or “super-autocomplete,” “blurry JPEG,” “information-shaped sentence” generator, and even a “‘say something that sounds like an answer’ machine.” I played the game too, with my “eraser of the implausible.”
I confess that I actually like these. They’re useful in a way: They provide a familiar example of what’s wrong with LMs, which, being so novel, don’t yet have a place in our mental repository of concepts. They capture the models’ limitations, downsides, and shortcomings. But, for all their imaginative power, not even once did AI ethicists try to popularize an analogy that, while denouncing exaggerations, reflected the utility common people are finding in products like ChatGPT, Bing, Bard, Claude, or their open-source versions.
I believe it’s necessary to assess the failure modes of new technology, but not while refusing to ever admit its perks and virtues. And that’s exactly what happened: As AI’s ability improved and companies allowed user access, AI ethicists insisted—without any sign they’d ever change their minds—that LMs deserved nothing better than such contemptuous analogies. The tension and disagreements among ethicists became so untenable that eventually Kristian Lum, a professor at UChicago involved with the Conference on Fairness, Accountability, and Transparency (FAccT), decided to speak up (quoted here in full):
“There’s one existential risk I’m certain LLMs pose and that’s to the credibility of the field of FAccT / Ethical AI if we keep pushing the snake oil narrative about them.
This isn’t to say that they don’t make stupid mistakes, hallucinate, or “just” memorize training data. They do; I'm convinced. But the idea that this negates what is obvious to anyone who has signed up for a ChatGPT account—that they are incredibly powerful—is absurd.
This also isn’t to say that they aren’t incredibly concerning—they are even more so if we view them not as bullshit but as a powerful tool with as yet to be realized impacts & potential (for harm). If anything, research on the impacts & risks of these models is even more urgent.
My point is that we’re past the point where the underlying tenor of the conversation can still credibly be about debunking writ large. (By all means, continue debunking for certain use cases if called for.)
If the field is going to survive this “disruption” (sorry, cringe), we need to meet reality where it is, not hold on to a narrative that is so easily falsifiable by anyone with a smart phone and 10 minutes to spare.”
While the purposes of AI ethics' work and the underlying truths it reveals are invaluable, the means they're using to get their point across don't work: It was never a good idea to attack AI systems on their capabilities. It was a short-lived strategy. Pushing “the snake oil narrative” was bound to backfire once the systems improved. And they did. And they became “incredibly powerful.”
Once people could test ChatGPT firsthand, there was no point in trying to keep calling it a stochastic parrot—even if, in some sense, it’s still true. Because, in that sense, it’s also trivial. The other (more profound) meaning—not just that ChatGPT is a stochastic parrot, but that it is no more than that—is “easily falsifiable by anyone with a smart phone and 10 minutes to spare.” This kind of motte-and-bailey fallacy is what philosopher Daniel Dennett calls a “deepity.” My favorite example is that a human is a bunch of atoms. True at the lowest level, but false when trying to encompass all that it means to be human: we are much more than that, just as ChatGPT is more than a stochastic parrot or a bunch of parameters (if you allow me the comparison).
My unsolicited advice for AI ethicists
The current discourse on the ethics of AI, focused on pointing out flaws without considering the big picture (what people actually experience), is dead. If AI ethicists don’t course-correct, they will fail to communicate their most fundamental arguments. They need to know this. And it’s sad; I used to think they applied this kind of “destructive” approach because they had lost hope of convincing anyone and were tired of proselytizing. I no longer think that; they wouldn’t keep fighting if that were the case. And they are still fighting.
I think the real issue is that they're catastrophically disconnected from the impression they make on whoever stumbles upon their public statements without knowing much about any of this: about ChatGPT, about AI, or about the ways of the tech industry. They come across badly. They come across as lacking credibility. As a group that needs a constructed narrative to advance an agenda. And I know reality is much closer to the exact opposite.
(I want to say here that, as far as I can tell, there’s one notable exception, besides Lum and the others who supported her brave take: none of what I’ve written here applies to Margaret Mitchell, now at Hugging Face. I consider her a role model for how to do AI ethics and communicate appropriately.)
Here’s my advice for the rest: give up the aggressive discourse around the performance and “intelligence” of AI systems. It’s not worth it. Even if you’re mostly right, you can’t fight against the perception of millions of people. You can’t win. LMs, with all their flaws, are “incredibly powerful,” and with all their limitations, can be super useful. Focus instead on all the other points that you’ve been repeatedly—and rightly—raising: governance, data provenance, regulation, algorithmic discrimination, harmful deployment, and other social and ethical repercussions.
Lum tweeted this yesterday, possibly as an honest example of how she thinks you could communicate the good in LMs while still denouncing the cases that should be denounced:
“… I need to write myself a professional bio for a talk I'm giving and ChatGPT just wrote a draft that is *mostly* accurate and, importantly, not dripping with imposter syndrome like my self-written versions usually are.”
I think this attempt at finding common ground and embracing the good in LMs is one that not only the AI community at large would welcome but that the world can relate to; people would then be willing to listen to all the other—maybe less intuitive, and certainly more urgent—concerns you so passionately teach us about.
I don't think AI ethics, as a label, is dead. But now that regulations are starting to roll out and concerns about existential risk are at an all-time high, we need you more than ever. You need to rise to the occasion. Because if anyone can lead AI's progress toward a better world for minorities, workers, users, and everyone else, it's you.
Thanks for the great essay. I would respectfully suggest that what you're feeling is the anxiety of the liberal humanist, the center-left intellectual who fundamentally agrees with the emancipatory goals of the far left but regards their methods as counterproductive. As someone who wrote his doctoral dissertation on Rawls 25 years ago, and was sneered out of academia as a tool of the patriarchy and insufficiently radical, I share your concerns.
What I most admire about your piece is your refusal to succumb to contempt. This is the fatal flaw of the radical critique. Theorists of the far left construct a Manichean world where there are only two groups of people, and it's their job to sort them (workers vs. parasites, patriarchy vs. feminists, woke vs. benighted). It is a politics fundamentally fueled by contempt. It is why following the MLK Jr. or Camus playbook for liberalism is so difficult; it constructs a world of reasonable pluralism and requires you to view with respect those you profoundly disagree with.
You see the same familiar brush strokes across the blank canvas of every wave of technological innovation: VR and the metaverse, crypto and NFTs, and now AI and LLMs. The same familiar heroes and villains, the same I-speak-for-the-voiceless rhetoric, the same snide dismissals. Unfortunately, and as you correctly point out, those obligatory opening moves against generative AI smack of intellectual dishonesty. Lum's tweets acknowledge that. I admire you for defending your nuanced position: your fundamental sympathy for the project as a whole combined with your rejection of a politics that lacks the conceptual tools to truly grapple with what's going on. Keep up the great work.
I did a test a few days ago myself with Bard and GPT, using a well-respected translation of the Bible. I asked the LLMs questions about Bible quotes and they got them wrong. Both systems. It’s like they’re getting worse. And that should be the most basic text to have stored in the system.
I don’t get it.