24 Comments
Jim Preston:

Thanks for the great essay. I would respectfully suggest that what you're feeling is the anxiety of the liberal humanist: the center-left intellectual who fundamentally agrees with the emancipatory goals of the far left but regards their methods as counterproductive. As someone who wrote his doctoral dissertation on Rawls 25 years ago, and was sneered out of academia as a tool of the patriarchy and insufficiently radical, I share your concerns.

What I most admire about your piece is your refusal to succumb to contempt. This is the fatal flaw of the radical critique. Theorists of the far left construct a Manichean world where there are only two groups of people, and it's their job to sort them (workers vs. parasites, patriarchy vs. feminists, woke vs. benighted). It is a politics fundamentally fueled by contempt. It is why following the MLK Jr. or Camus playbook for liberalism is so difficult; it constructs a world of reasonable pluralism and requires you to view with respect those you profoundly disagree with.

You see the same familiar brush strokes across the blank canvas of every wave of technological innovation: VR and the metaverse, crypto and NFTs, and now AI and LLMs. The same familiar heroes and villains, the same I-speak-for-the-voiceless rhetoric, the same snide dismissals. Unfortunately, and as you correctly point out, those obligatory opening moves against generative AI smack of intellectual dishonesty. Lum's tweets acknowledge that. I admire you for defending your nuanced position: your fundamental sympathy for the project as a whole, but your rejection of a politics that lacks the conceptual tools to truly grapple with what's going on. Keep up the great work.

Charlotte Dune:

I did a test a few days ago myself with Bard and GPT and a well-respected translation of the Bible. I asked the LLMs questions about Bible quotes and they got them wrong. Both systems. It's like they're getting worse. And that should be the most basic text to have stored in the system.

I don’t get it.
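For anyone who wants to reproduce this kind of spot check programmatically, here's a minimal sketch, assuming the official OpenAI Python client with an OPENAI_API_KEY set in the environment; the model name and the verse are illustrative, not the exact setup used above:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A verse with a widely known wording (KJV); substitute any well-respected translation.
reference = "In the beginning God created the heaven and the earth."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Quote Genesis 1:1 from the King James Version, verbatim. Reply with the verse only.",
    }],
)
answer = response.choices[0].message.content.strip()

print("Model answer:", answer)
print("Exact match :", answer == reference)  # models often paraphrase rather than recall verbatim
```

The exact-match check is deliberately strict: it makes visible that these systems generate plausible text rather than retrieve stored passages, which is why even "the most basic text" isn't reliably reproduced word for word.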

Alberto Romero:

I don't think they're getting worse, but they're not designed to do well on those kinds of tasks. One example that went viral recently: a professor asked ChatGPT whether some text he copy-pasted into the prompt had been written by it (his students' final essays, which determined whether they could graduate). ChatGPT invariably said yes, and it was later proved false.

However, calling them "autocomplete systems" or "stochastic parrots" is intended to minimize the things they can do well and to highlight their limitations. In the right context, it's fine to use these metaphors, but not if their virtues are never acknowledged elsewhere.

Charlotte Dune:

I find their best use is to reformat things. For example, I recently used it to reformat a line-numbered screenplay I'd coauthored into a short story and to remove all the numbers. It did it in a flash, and it would have taken me at least a tedious hour.
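As an aside, the mechanical half of that task (stripping leading line numbers) can also be done deterministically, which makes the result trivially checkable. A minimal Python sketch, assuming the numbers appear as leading integers on each line; the regex and the sample lines are illustrative:

```python
import re

def strip_line_numbers(text: str) -> str:
    """Remove a leading integer (and any following punctuation/whitespace) from every line."""
    return "\n".join(
        re.sub(r"^\s*\d+[.:]?\s+", "", line)  # "12  INT. HOUSE - DAY" -> "INT. HOUSE - DAY"
        for line in text.splitlines()
    )

script = "1  INT. KITCHEN - NIGHT\n2  MARA\n3  We never talk anymore."
print(strip_line_numbers(script))
```

The creative half, turning the screenplay into prose, is where the LLM actually earns its keep; keeping the deterministic part separate means the error-prone part stays easy to verify.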

Alberto Romero:

That's one of the best uses I see, too. First, if the reformatting concerns your own work, you're by definition an expert on it. And second, it's very easy to catch a mistake if the system makes one (and it's probably less likely to make one in the first place).

Katie (Kathryn) Conrad:

Following these debates on Twitter and Mastodon, I think there are really a couple of key voices to which this critique especially applies. And I share your dismay. I too agree with their important interventions, but the tone and focus of their discourse over the last few months has made me want to back away slowly. And that's a huge loss, because their approach has divided the community, arguably more over tone than substance. To be fair, I can understand how frustrating it must be to hear Google's Pichai talk about the need for AI ethicists after one has been fired by Google as an AI ethicist, or to hear "stochastic parrot" attributed to Sam Altman instead of oneself. Agreed that Mitchell is a good exception.

Alberto Romero:

Indeed, I was thinking about a few highly visible people who are perceived as the leaders of the AI ethics group (for good reason). "Their approach has divided the community, arguably more over tone than substance": this is exactly right. Many people disagree with the substance, but some are willing to engage in conversation. When their tone doesn't allow that, they simply walk away. I'm sad about how this is distancing people who would otherwise listen attentively to what they have to say. I'm sure frustration and tiredness are central to this behavior, though.

Itai Leibowitz:

Fantastic read! As someone quite interested in AI, I agree with something you wrote in the intro to this series: "Friends and family know very little about AI and how it influences our daily lives."

I wonder how topics around AI ethics and its impact on society can reach family dinner conversations with the same ease and fluency with which I can share fun, daily ChatGPT examples (birthday poem, email to customer support, a trip itinerary, etc.)

South Park had a fun episode about ChatGPT [1], bringing it to life with school and dating topics. And I imagine many people saw the Pope at Burning Man [2]. But we would need many more everyday examples pointing to possible risks and challenges to come close to the incredible benefits that are, indeed, "obvious to anyone who has signed up for a ChatGPT account."

Eh, without claiming to be an AI ethicist, here's one example: an AI-powered advice column that tries to humanize a few examples of potential AI impact on our daily lives - from AI bias in recruiting to secret affairs with ChatGPT: https://dearai.substack.com/p/ai-powered-relationship-advice-for-the-ai-age

Thank you for this article and this series!

[1] https://southpark.cc.com/episodes/8byci4/south-park-deep-learning-season-26-ep-4

[2] https://www.nytimes.com/2023/04/08/technology/ai-photos-pope-francis.html

Alberto Romero:

"I wonder how topics around AI ethics and its impact on society can reach family dinner conversations with the same ease and fluency with which I can share fun, daily ChatGPT examples" Very good question Itai, I don't think it's easy but worth trying for sure!

Nicole Hennig:

This expresses just how I’ve been feeling about their arguments. I want to support AI ethics but the constant criticism of generative AI... it’s just as you say. Thanks for writing this.

Alberto Romero:

Thank you, Nicole. Let's hope this situation doesn't degrade further!

jazzbox35:

This is spot on, very well argued, 100% on the money!!!

Alberto Romero:

Thanks Mike!!

jazzbox35:

AI is becoming as divisive as American politics, the problem being that the lunatic fringes tend to get all the attention. Also, I think some long-time researchers just feel threatened by the AI nouveau riche.

Michel Schellekens:

Currently the swing is in the other direction: so many specialists claim that AI is life-threatening and will come back to haunt us. A lone voice of reason is LeCun. Most alarmists must be aware their claims are off the wall based on current progress. Paraphrasing a recent claim: "there is a chance AI will annihilate us, and the chance is close to zero." This vacuous claim made the news channels gobble up the first part.

It makes me wonder whether the alarmist claims serve to distract from the criticism (which has also swung too far, as discussed in this article). If AI is seen as life-threatening, then this quiets criticism. It makes AI seem real and fully in place (currently it captures a slice of intelligence: detecting patterns). A position of power is created, as those who claim to see massive danger ahead are the most likely port of call for people looking to do something about it. It is a great play if viewed as chess, but it hollows out trust in science.

The seesaw between both parties is tiresome, and it is sad to see it occur in a field I love. I respect your level-headed contributions.

Soumya:

Hey Alberto, hope you are doing great!

I am Soumya from ByteBrief (bytebrief.co)

We love your newsletter, and we have good news for you!

We also run a beehiiv newsletter about AI with 19K+ readers, and we're inviting writers to share their best knowledge on any topic they love in our next issue, completely dedicated to you!

If it's tech/AI, our audience will be more than happy to read your work in our newsletter.

We could explore cross-promotion opportunities if you're interested.

Or if you have any other proposal, we can proceed with that too.

Contact us: hello@bytebrief.co

Thanks

Camino:

Thank you very much for this newsletter and the previous ones: information worth its weight in gold for those of us who don't know much about this topic of AI. With your newsletters I'm beginning to understand the advantages and disadvantages of AI, and the risks humanity could face if we don't approach it responsibly. Thank you.

Michel Schellekens:

The over-hyped state of the field is as distasteful as the over-criticism. The beauty of the contribution does get lost; I agree that this is a shame. It's nice that you point out who is more level-headed amidst all this. I look forward to reading more by M. Mitchell.

Jacques Larose:

You would get an A for this article without the need to specify that you are a « white male ».

Andrew Mayer:

Could part of the issue be that those in the technical class are so focused on a singular set of current beliefs, with a specific definition of race at the center of every ethical harm, that we've rendered ourselves unable to deal with any broader conditions that may be causing the negative outcomes we're trying to solve for?

Data-driven AI ultimately has no interest in our biases and beliefs of the current moment. And if you ask it the right questions, it's perfectly happy to reflect answers back to us that reveal fundamental truths about the deeper limits and flaws of humanity that may be too unpleasant or inconvenient to accept.

But until we accept them, we can never begin to come up with a genuine way to move past them. Instead, we'll try (and fail) to hobble AI so it can no longer reveal them.

Comment deleted (May 17, 2023)

Alberto Romero:

I understand your focus, Phil, and agree that, depending on the perspective we take, some topics are more important than others, but it's worth writing (and reading) about other aspects of AI and technology in general. You'd be happy to know that AI people like Altman seem very willing to draw a parallel between the current state of this tech and nuclear weapons in the mid-20th century.

Comment deleted (May 18, 2023)

Alberto Romero:

And I thank you, Phil, for engaging with me and sharing your insightful arguments!

I agree that Altman talking about regulation isn't really what it appears to be. He wants to regulate the space now that he's already exploited the benefits of no regulation.

The thing is: what do you propose so that we stop this knowledge explosion? Don't you think it's inherent to what humans are? Because I actually agree with you that many of the things we might be able to know, discover, and build aren't really helpful or good, but could indeed get us closer to extinction (not necessarily AI-driven extinction).

What I don't see is how to change that, or whether it's even possible. AI ethics and AI safety (with barely any overlap between them) both take as a premise that humanity moves forward, and that direction entails knowing more.

I'm open and very interested in hearing arguments about this.

Comment deleted (May 18, 2023)

Alberto Romero:

If we can't predict it, how can we talk about it? The things we talk about should be connected to reality somehow. If the assumptions that connect something like your "knowledge explosion" to our current reality aren't well described, it's hard to convince policymakers that it's worth thinking about.

I think this is one of the problems with the arguments of existential risk in general.

Do you have a solution for this?

Comment deleted (May 18, 2023)

Alberto Romero:

Comparing science and religion is quite a stretch (though I understand the respects in which you're comparing them). Yet if the scientific community is an authority right now, it is for better reasons than the ones that made the different religions authorities before. The Enlightenment wasn't perfect, but it was definitely better than what came before in basically any sense imaginable.

Science isn't perfect by any means, but I think the problem you're highlighting isn't science per se. It's how we use the knowledge that science provides in the broader context of geopolitical pressures and technological advances that may allow a country to obtain an edge over its adversaries. Knowledge about nuclear fission and fusion allows for many things beyond atomic weapons (including, potentially, fusion energy plants). Knowledge about genetics, DNA, etc. allows for things other than genetic editing. The same goes for the science behind modern AI (e.g., the cognitive sciences give us a lot of knowledge about our brains, not just vague indications of how to replicate them with statistical pattern matching).

If we imagine a world where science is disconnected from how we use that science, how does that change your arguments? Aren't the political aspects of technology the real problem here? What kind of "new authority" should emerge as a better alternative?
