31 Comments
May 20, 2023 · Liked by Alberto Romero

Thanks for the great essay. I would respectfully suggest that what you're feeling is the anxiety of the liberal humanist, the center-left intellectual who fundamentally agrees with the emancipatory goals of the far left but regards their methods as counterproductive. As someone who wrote his doctoral dissertation on Rawls 25 years ago, and was sneered out of academia as a tool of the patriarchy and insufficiently radical, I share your concerns.

What I most admire about your piece is your refusal to succumb to contempt. That is the fatal flaw of the radical critique. Theorists of the far left construct a Manichean world where there are only two groups of people, and it's their job to sort them (workers vs. parasites, patriarchy vs. feminists, woke vs. benighted). It is a politics fundamentally fueled by contempt. That is why following the MLK Jr. or Camus playbook for liberalism is so difficult; it constructs a world of reasonable pluralism and requires you to view with respect those you profoundly disagree with.

You see the same familiar brush strokes across the blank canvas of every wave of technological innovation: VR and the metaverse, crypto and NFTs, and now AI and LLMs. The same familiar heroes and villains, the same I-speak-for-the-voiceless rhetoric, the same snide dismissals. Unfortunately, and as you correctly point out, those obligatory opening moves against generative AI smack of intellectual dishonesty. Lum's tweets acknowledge that. I admire you for defending your nuanced position: your fundamental sympathy for the project as a whole, but your rejection of a politics that lacks the conceptual tools to truly grapple with what's going on. Keep up the great work.


I ran a test a few days ago myself with Bard and GPT and a well-respected translation of the Bible. I asked both LLMs questions about Bible quotes and they got them wrong. Both systems. It's like they're getting worse. And that should be the most basic text to have stored in the system.

I don’t get it.

author

I don't think they're getting worse, but they're not designed to do well on those kinds of tasks. One example that went viral recently: a professor asked ChatGPT whether some text he copy-pasted into the prompt had been written by it (his students' final essays, which determined whether they could graduate). ChatGPT invariably said yes, and that has since been proven false.

However, calling them "autocomplete systems" or "stochastic parrots" is intended to minimize the things they can do well and to highlight their limitations. In the right context, it's fine to use these metaphors, but not if their virtues are never acknowledged elsewhere.
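To make that viral failure mode concrete, here's a minimal sketch of the professor's (flawed) approach, written against the 2023-era openai Python client; the model name, prompt wording, and placeholder key are my assumptions. The point is that the model keeps no memory of past sessions, so its yes/no answer is a guess, not a record lookup:

```python
import openai  # 2023-era client (openai<1.0)

openai.api_key = "sk-..."  # placeholder; not a real key

def did_chatgpt_write(text: str) -> str:
    """Ask the model whether it authored `text` -- an unreliable oracle."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[{
            "role": "user",
            "content": f"Did you write the following text? Answer yes or no.\n\n{text}",
        }],
    )
    # No session memory means the model cannot actually check authorship;
    # it tends to answer "yes" regardless of the text's true origin.
    return response.choices[0].message["content"]
```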


I find their best use is reformatting things. For example, I recently used it to reformat a line-numbered screenplay I'd coauthored into a short story and to remove all the numbers. It did it in a flash, and it would have taken me at least a tedious hour.
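For what it's worth, a task that mechanical can also be done deterministically with a short script. A minimal sketch, assuming each line starts with its number (the filename is hypothetical):

```python
import re

def strip_line_numbers(text: str) -> str:
    """Remove a leading number (e.g. '42', '42.', '42:') from every line."""
    return "\n".join(
        re.sub(r"^\s*\d+[.:]?\s?", "", line)
        for line in text.splitlines()
    )

with open("screenplay.txt") as f:  # hypothetical input file
    print(strip_line_numbers(f.read()))
```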

author

That's one of the best uses I see, too. First, because if the reformatting concerns your own work, you're by definition an expert on the material. And second, it's very easy to catch a mistake if the system makes one (and it's probably less likely to make one in the first place).
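That check can itself be mechanized. Here's a minimal sketch of a sanity check that the model changed only layout, not wording (the filenames are hypothetical):

```python
import difflib

def wording_changes(original: str, reformatted: str) -> list[str]:
    """Return words added or removed, ignoring pure layout/whitespace changes."""
    diff = difflib.ndiff(original.split(), reformatted.split())
    return [d for d in diff if d.startswith(("+ ", "- "))]

# An empty list means only the layout changed, not the words.
with open("original.txt") as f1, open("reformatted.txt") as f2:
    print(wording_changes(f1.read(), f2.read()))
```

In the screenplay case, the stripped line numbers would show up as "-" entries, which is exactly the kind of diff you'd want to eyeball.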


Following these debates on Twitter and Mastodon, I think there are really a couple of key voices to which this critique especially applies. And I share your dismay. I too agree with their important interventions, but the tone and focus of their discourse over the last few months have made me want to back away slowly. And that's a huge loss, because their approach has divided the community, arguably more over tone than substance. To be fair, I can understand how frustrating it must be to hear Google's Pichai talk about the need for AI ethicists after one has been fired by Google as an AI ethicist, or to hear "stochastic parrot" attributed to Sam Altman instead of oneself. Agreed that Mitchell is a good exception.

author

Indeed, I was thinking about a few highly visible people who are perceived as the leaders of the AI ethics group (for good reason). "Their approach has divided the community, arguably more over tone than substance": this is exactly right. Many people disagree with the substance but are still willing to engage in conversation. When the tone doesn't allow that, they simply walk away. I'm sad about how this is distancing people who would otherwise listen attentively to what they have to say. I'm sure frustration and tiredness are central to this behavior, though.

May 18, 2023 · Liked by Alberto Romero

Fantastic read! Being interested enough in AI, I agree with something you wrote in the intro to this series: "Friends and family know very little about AI and how it influences our daily lives."

I wonder how topics around AI ethics and its impact on society can reach family dinner conversations with the same ease and fluency with which I can share fun, daily ChatGPT examples (birthday poem, email to customer support, a trip itinerary, etc.)

South Park had a fun episode about ChatGPT [1], bringing it to life with school and dating topics. And I imagine many people saw the Pope at Burning Man [2]. But we would need many more everyday examples pointing to possible risks and challenges to come close to the incredible benefits that are, indeed, "obvious to anyone who has signed up for a ChatGPT account."

Eh, without claiming to be an AI ethicist, here's one example: an AI-powered advice column that tries to humanize a few examples of potential AI impact on our daily lives - from AI bias in recruiting to secret affairs with ChatGPT: https://dearai.substack.com/p/ai-powered-relationship-advice-for-the-ai-age

Thank you for this article and this series!

[1] https://southpark.cc.com/episodes/8byci4/south-park-deep-learning-season-26-ep-4

[2] https://www.nytimes.com/2023/04/08/technology/ai-photos-pope-francis.html

author

"I wonder how topics around AI ethics and its impact on society can reach family dinner conversations with the same ease and fluency with which I can share fun, daily ChatGPT examples" Very good question Itai, I don't think it's easy but worth trying for sure!


This expresses just how I’ve been feeling about their arguments. I want to support AI ethics but the constant criticism of generative AI... it’s just as you say. Thanks for writing this.

author

Thank you Nicole, let's hope this situation doesn't degrade further!

May 17, 2023 · Liked by Alberto Romero

This is spot on, very well argued, 100% on the money!!!

author

Thanks Mike!!

May 18, 2023 · edited May 18, 2023

AI is becoming as divisive as American politics, the problem being that the lunatic fringes tend to get all the attention. Also, I think some long-time researchers just feel threatened by the AI nouveau riche.


As usual, I will argue that we leap right over all these AI specific debates to focus on a wider view.

Pretend for a moment that we decisively resolved every single concern about AI. That doesn't matter. The knowledge explosion will keep rolling along, faster and faster, producing ever more powerful forces at an accelerating rate. AI is not the end of the 21st century, it's the beginning.

Forget about all the details of the present moment. Clear your mind. Sweep all that off the table. Focus on the big picture bottom line.

1) IF human beings are of limited ability (like every other creature on the planet)....

2) THEN a process which has the goal of developing ever more, ever greater powers, at an ever accelerating rate will inevitably exceed those limits sooner or later.

Don't focus on the particular products rolling off the end of the knowledge explosion assembly line.

Focus on the assembly line itself. If we don't learn how to take control of the assembly line, it will inevitably produce forces that we can't manage.

Would you buy a car that only had a gas pedal, but no brakes?

author

I understand your focus, Phil, and agree that, depending on the perspective we take, some topics are more important than others. But it's worth writing (and reading) about other aspects of AI and technology in general. You'd be happy to know that AI people like Altman seem very willing to draw a parallel between the current state of this tech and nuclear weapons in the mid-20th century.


Hi Alberto, good day to you sir. I don't object to discussion of AI. Here I am. A paying subscriber. Taking it all in, becoming educated at your hands. Thanks for that.

I'm attempting to share that concerns about AI safety and ethics, etc., may be largely meaningless if we don't also zoom out to the bigger picture of the knowledge explosion as a whole. What difference will it really make if we fix every problem presented by AI, only to then bring on game-over chaos by some other method?

What has Altman learned from nuclear weapons? Build as many as possible as quickly as possible?

Ok, he's asking for government regulation, point taken. He wants to know the rules so he doesn't get in trouble. But isn't this sort of like establishing rules that we hope will ensure that nuclear weapons are mass produced in a manner that won't hurt the workers in that industry?

author

And I thank you, Phil, for engaging with me and sharing your insightful arguments!

I agree that Altman talking about regulation isn't really what it appears to be. He wants to regulate the space now that he's already exploited the benefits of no regulation.

The thing is: what do you propose so that we stop this knowledge explosion? Don't you think it's inherent to what humans are? Because I actually agree with you that many of the things we might be able to know, discover, and build aren't really helpful or good, and could indeed bring us closer to extinction (not necessarily AI-driven extinction).

What I don't see is how to change that, or whether it's even possible. AI ethics and AI safety (with barely any overlap between them) both take as a premise that humanity moves forward, and that moving forward entails knowing more.

I'm open and very interested in hearing arguments about this.


Thank you as always for being an interesting conversation partner Alberto. Having attempted such discussion many times in many places, I have a sincere appreciation for what you can contribute.

We agree: humans are about learning. I am proposing learning! I am proposing that we learn how to control the pace and direction of the knowledge explosion. Not stopping the car, but learning how to drive it instead of letting it drive us.

Yes, this is a very big challenge, agreed. But so are AI development, space exploration, and a million other things that we approach with great confidence and ability.

Imho, the reason we aren't currently taking on this challenge to the degree necessary is that we still think we can get by without doing so. We think taking control of the knowledge explosion is optional. When we conclude that meeting this challenge is not optional, and recognize that we have our backs against the wall, human beings then become very resourceful.

Various factors may be obstructing our understanding...

1) The knowledge explosion has produced many miracles, which naturally we want more of. So when somebody starts talking about changing the routine, we become suspicious. Don't mess with our goodies!

2) We look to the science community as the experts on knowledge, and fail to recognize that they are in the least objective position regarding the question of how much science we should be doing. Authority worship.

3) All of the above is pretty abstract and nerdy, and that's not the channel most people are on.

4) We're very distracted by a million other things being relentlessly pushed into our faces every day by modern media.

It's not unreasonable to propose that learning how to take control of the knowledge explosion is beyond our ability. That may very well prove to be true. Of course I don't know. But we're human beings. We're not supposed to just lie down and die, right?

What can we do? We can talk about this. We can engage as many minds as we can on such questions. We can have a little more faith in our ability to take control of our destiny. We can challenge the authorities and try to determine whether they know what they're talking about.

And if all that fails, we can be a little better prepared for the pain that is coming.


Think back a century to 1923. Think about everything that happened in the 20th century that almost nobody would have predicted or even imagined in 1923.

That's where we are today in the 21st century too. 2023. And no clue what's coming next. AI may be the least of our worries.

author

If we can't predict it, how can we talk about it? The things we talk about should be connected to reality somehow. If the assumptions that connect something like your "knowledge explosion" to our current reality aren't well described, it's hard to convince policymakers that it's worth thinking about.

I think this is one of the problems with the arguments of existential risk in general.

Do you have a solution for this?


What I imagine is a process similar to what happened 500 years ago with the Enlightenment.

500 years ago or so various thinkers began to challenge the leading cultural authority of that time, the Church. These thinkers sought to replace authority worship and blind faith dogmas with reason.

It's time to do that again, a new Enlightenment.

In our time, the leading cultural authority is the science community. Their blind faith dogma is the "more is better" relationship with knowledge. We accept that dogma on faith, because we are worshiping their authority rather than thinking for ourselves. You know, the science community are the "experts", so we assume they must be right. We can't imagine that they could all be wrong. We don't want to believe that could be possible, because that's quite unsettling.

Religion didn't go away 500 years ago, but our relationship with religion has matured. Science won't go away after today's Enlightenment process. But our relationship with science needs to mature. "More is better" is a simplistic immature concept which needs to grow up.

author

Comparing science and religion is quite a stretch (though I understand the respects in which you're comparing them). Yet, if the scientific community is an authority right now, it is for better reasons than the ones that made the different religions authorities before. The Enlightenment wasn't perfect, but it was definitely better than what came before in basically any sense imaginable.

Science isn't perfect by any means, but I think the problem you're highlighting isn't science per se. It's how we use the knowledge that science provides in the broader context of geopolitical pressures and technological advances that may allow a country to gain an edge over its adversaries. Knowledge about nuclear fission and fusion allows for many things beyond atomic weapons (including, potentially, fusion power plants). Knowledge about genetics, DNA, etc. allows for things other than genetic editing. The same goes for the science behind modern AI (e.g., the cognitive sciences give us a lot of knowledge about our brains, not just some vague indications of how to replicate them with statistical pattern matching).

If we imagine a world where science is disconnected from how we use it, how does that change your arguments? Aren't the political aspects of technology the real problem here? What kind of "new authority" should emerge as a better alternative?


Yes, the problem I'm pointing to is not the scientific method. That works as intended. The problem is our relationship with science.

The points you raise about religion vs. science are very interesting to me. Very. But out of respect for you, I'm wary of going any further off the topic of AI than I already have. Should you wish to pursue that discussion further, either here or elsewhere, I'm in. You lead, I'll follow.

What kind of new authority could manage the knowledge explosion? I don't know. If there is an answer to that, I expect it will likely emerge out of some crisis. We don't really want an answer to that question currently, and so are unlikely to find one.

One factor I've written quite a bit about is the threat presented by the marriage between an accelerating knowledge explosion and violent men. This is only one factor, but a pretty big one. For now I'll just say that marriage is unsustainable, and direct readers who are interested to my blog for more.

Thank you as always for accommodating my obsessions. I hope such discussion between us will prove interesting to at least some of your readers.


We can't predict the details of what is coming, agreed. But we can use common sense to examine where the process as a whole is headed, which is what I'm trying to do.

1) Knowledge development feeds back upon itself, resulting in an accelerating development of new knowledge.

2) Human ability is limited.

3) So if we continue to develop ever more, ever larger powers, at an increasing pace....

4) Sooner or later we will exceed the limits of our ability to manage such powers.

We don't know exactly what the knowledge explosion will produce, nor exactly what the limits of human ability are. So we can't credibly predict what will happen when. But we can see in general terms where the process as a whole is headed.

Perhaps we could use the concept of the Peter Principle as a shorthand way to refer to this calculation? We keep getting promoted to a higher level "job". Sooner or later we will find ourselves in a position which we aren't qualified for.

Will policy makers think about any of this? Best guess, we probably won't see serious discussion of such a larger perspective until after some kind of dramatic crisis which is large enough to shake the foundations of today's status quo.

Pain is our most likely teacher. But it doesn't hurt to throw a Hail Mary pass and hope that maybe we get lucky and reason just might work.


Currently the swing is in the other direction: many specialists claim that AI is life-threatening and will come back to haunt us. A lone voice of reason is LeCun. Most alarmists must be aware that their claims are off the wall given current progress. Paraphrasing a recent claim: "there is a chance AI will annihilate us, and the chance is close to zero." This vacuous claim made the news channels gobble up the first part. It makes me wonder whether the alarmist claims serve to distract from the criticism (which has also swung too far, as discussed in this article). If AI is seen as life-threatening, that quiets criticism and makes AI seem real and fully in place (currently it captures a slice of intelligence: detecting patterns). A position of power is created, as those who claim to see massive danger ahead become the most likely port of call for people looking to do something about it. It is a great play if viewed as chess, but it hollows out trust in science. The seesaw between the two camps is tiresome, and it is sad to see it in a field I love. I respect your level-headed contributions.


Hey Alberto, hope you are doing great!

I am Soumya from ByteBrief (bytebrief.co)

We love your newsletter and we have good news for you!

We're also running a beehiiv newsletter about AI with 19K+ readers, and we're inviting writers to share their best knowledge on any topic they love in our next issue, completely dedicated to you!

If it's tech/AI, our audience will be more than happy to read your work in our newsletter.

We'd love to explore cross-promotion opportunities if you're interested.

Or if you have any other proposal, we can proceed with that too.

Contact us: hello@bytebrief.co

Thanks


Thank you very much for this newsletter and the previous ones; this information is worth gold for those of us who don't know much about AI. With your newsletters, I'm beginning to understand the advantages and disadvantages of AI and the risks humanity could face if we don't handle it responsibly. Thank you.


The over-hyped state of the field is as distasteful as the over-criticism. The beauty of the contribution does get lost. I agree that this is a shame. It's nice that you point out who is more level-headed amidst all this. I look forward to reading more by M. Mitchell.


You would have an A for this article without the need to specify that you are a « white male ».


Could part of the issue be that those in the technical class are so focused on a singular set of current beliefs around a specific definition of race being at the center of every ethical harm that we’ve rendered ourselves unable to deal with any broader conditions that may be causing the negative outcomes we’re trying to solve for?

Data driven AI ultimately has no interest in our biases and beliefs of the current moment. And if you ask it the right questions it’s perfectly happy to reflect answers back to us that reveal fundamental truths about the deeper limits and flaws of humanity that may be too unpleasant or inconvenient to accept.

But until we accept them, we can never begin to come up with a genuine way to move past them. Instead, we'll try (and fail) to hobble AI so it can no longer reveal them.
