Have you also noticed that AI ethics people are more silent lately or is it just me?
Did they go elsewhere because Twitter AI grew apathetic to their often unwarranted hostility? Have news magazines stopped calling them to focus their reporting on model capabilities instead? Are AI echo chambers settling and it just happens that I’m in a different one? Has AI ethics failed to achieve its admittedly relevant goals due to its misdirected efforts?
I’m not sure what caused their disappearance from my radar (perhaps it’s a mix of the above), but their quietness is now so apparent that it has become quite loud.
I think AI ethics’ purpose was a laudable one, but I also feel they failed in the execution. The substance was there; the form was mistaken. This is my attempt at understanding what happened to them and what they should’ve done differently.
The short life of a stochastic parrot
I remember the Stochastic Parrots paper from March 2021. It was a sensation, producing a huge wave of (deserved) anti-hype against large language models that defined the conversation for the next two years.
The paper described an exhaustive list of the limitations of GPT-like systems. Some powerful voices resisted but the arguments were compelling enough that the AI ethics movement gathered an unprecedented amount of followers.
If an enthusiastic newcomer showed you a GPT-3-made Shakespearean sonnet or a Lil Wayne rap about Harry Potter, you said: “Yes, but GPT-3 is just a stochastic parrot. It doesn’t understand a word it’s saying.” And the debate was over. You’d won.
This went on for a while. Debates about then-future AI models went like this:
“Oh, so you are cheering for the next GPT? Don’t you know it’s not going to be that great? I mean, it won’t be able to reason regardless of the amount of data OpenAI throws at it. LLMs’ limitations are obvious by now.”
“How can you be so sure? it’s not even releas…”
“Stochastic. Parrot.”
“Okay, I too read the paper, but…”
“PARROT.”
People overused the term and its power. They divested it of all meaning as a useful pointer to AI’s flaws and turned it into a weapon. Armchair experts kept delighting us with displays of “AI knowledge,” seemingly getting ahead in Twitter debates, while OpenAI and other AI startups got ahead where it mattered: building more AI.
With an enviable trust in the scaling laws (and in their own ability to outlive them if necessary), companies released model after model, reaching new performance heights. AI’s apparent competence grew toward a ceiling it never seemed to meet, peaking with the much-awaited GPT-4.
Stochastic parrots had become super parrots.
The evidence was overwhelming. GPT-4 screenshots led to goalpost-moving. Conversations about AI’s reasoning skills changed in tone, giving way to cautious reconsideration, even among skeptical veteran researchers.
The “stochastic parrot” epithet originally referred to problems that still exist, but its rhetorical blade went blunt. At the very end of its life, it was merely a sign of partisanship. Nowadays, if you hear someone say it, you just think they are, ironically, parroting a phrase unaware it’s obsolete.
The great mistake: They dismissed the promise
I brought up the stochastic parrot story because it’s the epitome of the worst mistake AI ethics, as a group, made: They went against AI from all fronts.
The fight became unmanageable for them (they were up against big tech and emerging AI startups, as well as VCs and even government officials with ties to Silicon Valley). But most importantly, in trying to tie up the AI conversation with a very long chain made of every argument against AI they considered sound, they forgot that a single weak link could break the whole thing.
So it went. They mixed valid arguments favoring governance, transparency, and responsibility with attacks on AI as a technical innovation and as a technological promise:
“AI companies are unethical because they scrape the web to gather data created collectively and then profit from selling us back that data regurgitated by costly software.”
“Also, ChatGPT is not that good, actually.”
It became trivial to reject the whole idea of AI ethics. They drafted elaborate arguments raising valuable criticisms, only to let their contempt toward the “how” of modern AI blur into the “what.” Their views turned into coping-motivated proselytism. They’d try to patronize others into thinking AI was going nowhere.
But they ran into their weak link: You can’t steal people’s truth from them.