Have you also noticed that AI ethics people have gone quieter lately, or is it just me?
Did they go elsewhere because AI Twitter grew apathetic to their often unwarranted hostility? Have news magazines stopped calling them, focusing their reporting on model capabilities instead? Are AI echo chambers settling, and it just happens that I’m in a different one? Has AI ethics failed to achieve its admittedly relevant goals due to its misdirected efforts?
I’m not sure what caused their disappearance from my radar — perhaps it’s a mix of the above — but their quietness is so apparent now that it’s suddenly become quite loud.
I think AI ethics’ purpose was a laudable one, but I also feel they failed at the execution. The substance was there but the form was mistaken. This is my attempt at understanding what happened to them and what they should’ve done differently.
The short life of a stochastic parrot
I remember the stochastic parrots paper from March 2021. It was a sensation, producing a huge wave of (deserved) anti-hype against large language models that defined the conversation for the next two years.
The paper described an exhaustive list of the limitations of GPT-like systems. Some powerful voices resisted, but the arguments were compelling enough that the AI ethics movement gathered an unprecedented number of followers.
If an enthusiastic newcomer showed you a GPT-3-made Shakespearean sonnet or a Lil Wayne rap about Harry Potter, you said: “Yes, but GPT-3 is just a stochastic parrot. It doesn’t understand a word it’s saying.” And the debate was over. You’d won.
This went on for a while. Debates about then-future AI models went like this:
“Oh, so you are cheering for the next GPT? Don’t you know it’s not going to be that great? I mean, it won’t be able to reason regardless of the amount of data OpenAI throws at it. LLMs’ limitations are obvious by now.”
“How can you be so sure? It’s not even releas…”
“Stochastic. Parrot.”
“Okay, I too read the paper, but…”
“PARROT.”
People overused the term and its power. They divested it of all meaning as a useful pointer to AI’s flaws and turned it into a weapon. Armchair experts kept delighting us with displays of “AI knowledge,” seemingly getting ahead in Twitter debates, while OpenAI and other AI startups got ahead where it mattered: building more AI.
With an enviable trust in the scaling laws (and their ability to outlive them if necessary), companies released model after model, reaching new performance heights. AI’s apparent competence grew toward a ceiling it seemed to never meet, peaking with the much-awaited GPT-4.
Stochastic parrots had become super parrots.
The evidence was overwhelming. GPT-4 screenshots led to goalpost-moving. Conversations about AI’s reasoning skills changed in tone, giving way to cautious consideration, even among skeptical veteran researchers.
The “stochastic parrot” epithet originally referred to problems that still exist, but its rhetorical blade went blunt. At the very end of its life, it was merely a sign of partisanship. Nowadays, if you hear someone say it, you just think they are, ironically, parroting the phrase, unaware it’s obsolete.
The great mistake: They dismissed the promise
I brought up the stochastic parrot story because it’s the epitome of the worst mistake AI ethics, as a group, made: They went against AI from all fronts.
The fight became unmanageable for them (fighting big tech and emerging AI startups, as well as VCs and even government officials with ties to Silicon Valley). But most importantly, in trying to tie up the AI conversation with a very long chain made up of every argument against AI they considered sound, they forgot that a single weak link could break the whole thing.
So it went. They mixed valid arguments favoring governance, transparency, and responsibility with attacks on AI as a technical innovation and as a technological promise:
“AI companies are unethical because they scrape the web to gather data created collectively, then profit from selling us back that data regurgitated by costly software.”
“Also, ChatGPT is not that good, actually.”
It became trivial to reject the whole idea of AI ethics. They crafted elaborate arguments, raising valuable criticisms, only to let their contempt for the “how” of modern AI blur into the “what.” Their views turned into coping-motivated proselytism. They’d try to patronize others into thinking AI was going nowhere.
But they ran into their weak link: You can’t steal people’s truth from them.
Generative AI was always different from previous forms of AI in a fundamental sense: We can use it firsthand. We don’t need to decide which expert to trust but form our own perceptions of reality.
I’d read Emily Bender’s lessons about LLMs, worried that people might mistake their human-sounding prose for sentience. To assess her views, I’d log into ChatGPT to see for myself just how deluded you had to be to believe it could be of any use, professionally or personally.
Playing with the chatbot, I’d realize, a few times per session, how misguided AI ethicists’ disdain for AI’s potential value had always been.
And then, the rest of their beliefs just went with that one.
AI ethics’ unexpected ascent into notoriety…
AI ethics peaked after Google fired Timnit Gebru first, in late 2020, and then Margaret Mitchell in early 2021.
Up until that point, AI ethics had been a rather niche branch of AI research, devoted to making the technology a helpful addition to people’s lives rather than a social hindrance.
I can hardly think of a nobler purpose than that.
After Google’s widely broadcast (and heavily criticized) decision, people began to recognize the importance of applying a framework of ethical principles to developing and deploying AI systems in the world. Even its most fervent detractors were forced to acknowledge it was now a big part of the conversation.
This progression from niche to mainstream was, as far as I can tell, a net positive.
It’s good that tech companies are socially pushed to do things for the collective well-being instead of only to fulfill their shareholders’ interests. It hadn’t happened with social media and we were unwilling to make the same mistake, so the pressure worked.
But only superficially, because AI ethics didn’t really have that much power within companies. Every time an ethicist’s view clashed with the company’s goals, it was disregarded.
That’s what happened to Gebru and Mitchell. They were just trying to do the work they’d been hired to do. But Google never wanted them in the first place and it wasn’t willing to let them do their thing anyway.
…and tragic descent into oblivion
Counterintuitively, Gebru and Mitchell’s firings were at first a huge win for AI ethics. Only after the press extensively covered the events did the awareness spread far and wide that ethical principles were a requirement for doing things the right way.
I say “at first” because it eventually proved to be the beginning of the end. AI ethics was doomed the very moment it became popular.
AI ethicists realized they had amassed considerable influence. They became opinion makers. People were listening. I was listening (as you can tell from my earlier TAB articles on the topic).
Now, although I still try to listen, I can’t hear them. They’ve lost the emotional push that gave them a megaphone on social media and in the main outlets because they decided to go against the technology itself.
This mistake allowed AI companies, which had been praying for the Overton window to move against AI ethics, to disband their responsible AI teams. Two of the biggest players, Microsoft and Meta, did it earlier this year. Smaller labs like OpenAI or Anthropic never felt enough pressure to set up theirs in the first place.
Big tech even managed to mix up AI ethics and AI safety goals to confuse governments into believing long-term safety-centered risks are more important than current ethics-based harms.
AI ethicists spent the past years voicing relevant concerns about the dubious practices of AI companies. People applauded. Then they pivoted to convincing us that AI was going nowhere. But we had ChatGPT.
They bet against AI.
The first rule of AI is you never bet against AI.