26 Comments

Alberto, great post! I coincidentally wrote along very similar lines in my Substack post yesterday: https://trustedtech.substack.com/p/trusted-ai-005-our-average-future

I used Google Translate (and machine translation more broadly) as my example of a technology that was also hyped as world-changing and utopia-bringing and ended up...not.

May 11, 2023 · Liked by Alberto Romero

My understanding is that the 1956 conference took it upon itself to decide explicitly what the new field was going to be called. The two candidates were "Artificial Intelligence", which Marvin Minsky argued for, and "Automatic Programming". I have no idea who championed that alternative, but Minsky won the day and we have been dealing with the consequences ever since.

author

I think it was John McCarthy, not Minsky, who came up with the term. First, Claude Shannon persuaded him that it was too "flashy" (if I remember correctly), so they tried something else, but it didn't work. Finally, they settled on AI because they knew it'd get more funding. So yeah, it was all for the money in some sense (I'd say they were honest about wanting to build agents with human-level intelligence, although they greatly underestimated the quest or overestimated their ability. Or both).

Expand full comment

I pick "underestimated the quest". Remember what they called people who they thought were less intelligent than they were: simple. Given that prejudice, it was an easy mistake to make.

May 11, 2023 · edited May 11, 2023 · Liked by Alberto Romero

Experts strive to reach their audience with their worldviews and information, and in this endeavor, they must navigate a media landscape that rewards soundbites over substance. Moreover, they must also secure a continued presence in mainstream media, a task often contingent on the splash their appearances make rather than the depth of their insights.

How can the most knowledgeable and well-intentioned experts develop nuanced perspectives amidst the attention-grabbing and oversimplification tactics employed by media, algorithms, and self-proclaimed, less scrupulous "experts"?

The responsibility rests not only with the media or algorithms, but also significantly with us, as consumers of this information. We must cultivate a demand for more nuanced and in-depth analysis, signaling to these platforms that there is an audience for such content.

The challenge is not simply about modifying the behavior of the experts, media or algorithms but about reshaping the entire ecosystem of information dissemination. This involves fostering a culture that values in-depth analysis over sensationalism and empowers experts to engage directly with the public--as you do here, Alberto.

It's a tall order, but one that could significantly improve our collective understanding of complex and world-bending issues like AI.

author

Couldn't agree more, Pascal. Not sure I'm as optimistic, though. I nevertheless expect that AI will slowly move out of the spotlight (not because of a winter, but because we will, hopefully, get used to it) so that these communication and marketing problems disappear, opening the way for more honest science.

May 11, 2023 · Liked by Alberto Romero

A factor that I think weighs pretty heavily is that there is a sector of the population that worships -- and I do mean worships -- what they call and see as "intelligence". "Intelligent people" were the people they respected and deferred to and tried to model themselves after. (Or perhaps what I mean is that they saw the people who got all the goodies as being "intelligent".) "Intelligence" was the way they ranked humans, very much including themselves. If they had an IQ of 125, they felt superior to people with IQs of 100 and inferior to those who boasted an IQ of 150. People with high IQs were just more likely to be "right" about just about everything than people with lower ones. A world in which the average IQ was higher than theirs would be a nightmare. They would go through life feeling inferior to everyone. Reading about machines that are "artificially intelligent" triggers all these responses.

author

Thanks for this take, Fred. I will actually touch on this in one of my next articles. Not exactly with the framing you mention, but related to the arrogance of intelligent people (so, instead of the POV of intelligent people feeling inferior to AGI, I'll use the POV of intelligent people in AI being arrogant enough to think they can build and predict this future). I'm still drafting it, but I'll probably publish it soon-ish.

May 11, 2023 · Liked by Alberto Romero

"Inferior" comes close to my meaning, but what I had in mind was closer to "anxiety". The people I was thinking of -- and I have worked with them all my life -- get their sense of worth out of being more intelligent than whomever they are talking to -- you can hear it in their voice -- which means they feel inferior, in some very deep sense, to anyone they meet who seems more "intelligent" than they are. They live with the anxiety of interacting with people like that 24/7. This is the first question in their mind whenever they meet anyone. All these anxieties spill out when the topic of "intelligent" machines comes up.

If only the Dartmouth conference had agreed to call their field "automated programming", all this psychic torture would have been avoided.

author

It's like you read my mind lol. Another draft I have is about the name "AI," using as inspiration and starting point a recent essay by Naomi Klein in The Guardian (that you may have read already) in which she flips the term "hallucination" on its head to criticize the AI industry leaders and their public stances on the present and future of AI. I will argue that "AI" is the original hallucination, the original sin of the field, and that it all went downhill from there. I'm still not sure if I will explore how it all could've been if it had been called differently, or maybe analyze how this term influences our beliefs and perceptions... Idk yet, just thinking out loud.

Expand full comment

Emotions have always been the enemy.

May 10, 2023 · Liked by Alberto Romero

Excellent piece, Alberto!! Will re-read tomorrow. I just came back from a giant big-box home reno store with almost no staff except for two security guards to direct you to self-checkout and look at your purchased items and receipt. Two things we will surely live with under AI: surveillance capitalism, and rapid change driven by the need for ROI on the huge investments AI requires. Those investments will pay off with automation that deskills, detasks, and leads inevitably to job losses on a huge scale. But it won't be utopia or a horror show; that's just a way of framing the debate to drive clicks.

author

Thanks for your take, John. I agree that any AI that's developed today will respond to the needs of capitalism and will inevitably be used in the ways you mention. I don't think anyone can deny this. I just hope we can find a way to do things better (at least better than we did with social media in the last decade).

Expand full comment

Hi again Alberto...

A quick first impression, then I will reread.

I find this part to be, well, apologies, completely wrong...

"Laypeople lack the criteria, knowledge, and background to decide over matters like the present and future of AI. In taking these conversations to the public town square and debates about the questions they pose to social media we are irremediably undermining AI as a scientific endeavor."

Scientists are the least objective observers of how much science we should do, and how fast we should do it. AI industry experts are the least objective observers of where we should go with AI. And neither of these parties has any special expert knowledge of the crucial factor, the human condition.

Scientists and AI engineers are intelligent, well-educated people whose careers are focused on very narrow, highly specialized technical subjects. We should respect them for what they're good at, but not look to them as some kind of clergy who have answers to all the biggest questions, such as what direction this civilization should take.

On questions of that scale, scientists and AI engineers are just like the rest of us: intelligent, educated people who are entitled to have and express their opinions. As we listen to their opinions, please keep this in mind: the incomes of scientists and AI engineers depend on us doing more science, and more AI.

Should a reader like evidence in support of the above claims, here you go:

The biggest, most immediate threat we face today is not AI but nuclear weapons, a subject overwhelmingly ignored by the intellectual elite class as a group, with only a relatively tiny number of exceptions.

If we as a society, all the way up to the highest levels, don't possess the ability to focus our attention on a known, proven threat which could erase the modern world in the next hour, then there's no credible argument for us being ready for yet another significant threat, whatever its exact nature may turn out to be.

author

Hey Phil, thanks for this contrarian take. A few things:

"Scientists are the least objective observers of how much science we should do." When you have grounded knowledge about something, you can't be objective. Pure objectivity can only occur with absolute ignorance. To me, objectivity in this sense doesn't mean much. Scientists may not be objective, but they're the most capable, which is what I care about here.

About "AI industry experts" we agree. I'm definitely not referring to them here. I believe AI should be approached with a multidisciplinary mindset. It can't be done any other way, for the reasons you list.

"On questions of that scale, scientists ... are just like the rest of us." Disagree here. There are debates of opinion and there are debates of fact. In the latter kind, a scientist's take shouldn't be valued the same as a layperson's. For instance, I'd rather have expert aerospace engineers build the planes I fly in. I'd rather have a doctor examine my body than a lawyer. I'd rather have a lawyer defend me in court than a doctor. And so on. Some questions, though, aren't a matter of fact. And among those are the big questions you may be thinking about here. In those cases, we agree.

"The biggest, most immediate threat we face today is not AI but nuclear weapons, a subject overwhelmingly ignored by the intellectual elite class as a group, with only a relatively tiny number of exceptions." I don't think it's a "tiny number" of scientists who have thought deeply about nuclear weapons. I'd dare say this is probably one of the most recurrent topics in conversations among scientists of any discipline. The fact that we built these weapons, the US used them, and we, as a society, haven't quite succeeded in completely stopping their production isn't so much a consequence of scientists' carelessness as of geopolitical forces. I agree, as you know, that nuclear weapons are probably more worrisome right now than AI, by far.

Expand full comment

Hi again, I appreciate the good cheer with which you welcome contrarian takes. That's a sign of strength on your part, which I respect.

It's not the scientist's technical knowledge which impairs their objectivity, it's their RELATIONSHIP with knowledge, the "more is better" knowledge philosophy which almost all of them blindly cling to as if it were a religious belief. That "more is better" philosophy cannot be rationally defended unless we first assume that human beings are god-like, possessed of unlimited ability.

And of course there is the scientist's income, an obstacle to anyone's objectivity in regards to their own field.

I agree that scientists are typically quite objective about their data, but that is not what's on the table when making judgements about the future of humanity. The question of how much risk we should take on is not a scientific question.

On purely technical questions, scientists are credible authorities; we agree there.

If scientists as a group have thought deeply about nuclear weapons, then why are they working as hard as they can to give humanity ever more, ever larger powers, at an ever-accelerating rate? You know, they're still eagerly spending billions of dollars studying particles even more fundamental than the atom, while pushing ahead on every other front as well.

I don't mean to demonize scientists; I agree they are overwhelmingly people of good intentions. I'm just arguing against perceiving them as some kind of all-knowing clergy.

author

I think I now understand better where you're coming from. "It's not the scientist's technical knowledge which impairs their objectivity, it's their RELATIONSHIP with knowledge, the "more is better" knowledge philosophy." I agree with you here (just the exact quote I pasted here, not the whole sentence). I'm curious to know why you object to this mindset (I don't, FWIW--I guess I identify with that intellectual curiosity, although maybe not above moral considerations, contrary to what Oppenheimer's post-bomb claims suggest).

Expand full comment

Thanks for your interest and the question, Alberto. I hope I'm replying to what you asked.

I'm for intellectual curiosity and learning too. And what I'd like to see us learn in the 21st century is how to take control of the knowledge development process. Like in a car, where you have both the accelerator and the brakes.

The simplistic "more is better" knowledge philosophy assumes that human beings can successfully manage any amount of power and change, delivered at any rate.

In the long era of knowledge scarcity, "more is better" wasn't a problem, because we were learning quite slowly. In that environment, trying to learn faster made sense.

Today we no longer live in a knowledge scarcity environment, but in a radically different new environment, where knowledge is exploding in every direction at an accelerating pace. Our relationship with knowledge needs to adapt to this new environment.

Our knowledge is racing ahead, but our relationship with knowledge is stuck in the past. This is confusing for people because, on the one hand, the science community looks brilliant, and when it comes to science, it is.

But when it comes to our relationship with knowledge, and thus science, the science community is roughly a century behind the times. And for very human reasons, they will be the last people to say that. So somebody else has to.

May 11, 2023 · Liked by Alberto Romero

Good points, Phil. I really appreciate your essay, Alberto, especially the nuanced approach you’re taking.

When scientists are the ones making all the decisions, we get people like Hinton, who are steering our society using only one part of their brain. We need all sorts of "experts" contributing to decision making, even people who are not normally considered experts. Only when there's real diversity in our leadership can we really do what's best for everyone.

author

"Only when there’s real diversity in our leadership can we really do what’s best for everyone." 100% agreed Andrew!

Expand full comment

Alberto writes, "...if we choose well, we can achieve the end of suffering instead of causing the end of the world?"

If our goal is to achieve the end of suffering (or radically reduce it at least), one answer is staring us in the face. The overwhelming majority of violence at every level of society, all over the world, for thousands of years, has been committed by men. I would argue that it is this phenomenon which poses the most serious threat to AI.

Expand full comment

I've said this before so I'll be brief. The threat from AI may arise less from AI itself than from AI's role as another source of fuel poured on the knowledge explosion. So far at least, I've not seen any discussion of this angle in any AI commentary. It may exist; I just haven't found it yet.

author

Can you define "knowledge explosion"? I've read your comments on this, but maybe it's more specific than what I interpreted. In that case, I could point you to something about it, if it happens to be more concrete than my first impression suggested.

Expand full comment

Sure, please post any links you care to share.

DEFINITION: Knowledge development feeds back upon itself, resulting in an accelerating pace of new knowledge development. You know, once we invent computers, the Net, and now AI, we can use such tools to learn faster than we could before.
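A minimal sketch of that feedback loop (the symbols and the proportional-growth assumption are mine, purely to illustrate the definition): if the rate of new knowledge is proportional to the knowledge already accumulated,

\[
\frac{dK}{dt} = rK \quad\Longrightarrow\quad K(t) = K_0 e^{rt}
\]

then growth is exponential, and each new tool -- computers, the Net, AI -- effectively raises the rate r, steepening the curve.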

I explored why I see this as a problem on the following pages. Apologies to readers who have already seen these links posted here.

https://www.tannytalk.com/p/knowledge-knowledge-and-wisdom

https://www.tannytalk.com/p/our-relationship-with-knowledge

These articles are not about AI specifically, but more about the larger context within which AI resides.

author

I think I partially agree with you on this idea (I just reread the second post in depth, which I had already skimmed when you first posted it here). But I don't think the real problem here is knowledge per se, but the things that underlie it and push it in directions we should avoid, like making nuclear weapons, genetic engineering, or superintelligent AIs. Those things are, for example, capitalism, us-vs-them conflicts between countries, competition instead of cooperation at many levels, etc.

I'd say that if we were able to eliminate our hunger to know more, we'd also be able to eliminate those other things. And I think it'd be preferable to do the latter and keep our desire to know more intact. In the end, knowledge has brought us a lot of important things that improve our collective and individual well-being, like medicine and psychotherapy, art and music, or bicycles and refrigerators.

Expand full comment

I'm not arguing against learning, I'm arguing FOR learning something new. It's the science community (and the larger culture) that wants to keep on thinking the way they always have, and doing the same things they've always done.

Example: The science community wants to build a car that has only a gas pedal. I want to build a car that has both a gas pedal AND brakes. Having a car with brakes would be progress, not a retreat.

As to your first points about the things that underlie knowledge, you might be interested in the well-done documentary Chimp Empire on Netflix. The story closely follows a community of about 100 chimps in a Ugandan forest.

https://www.netflix.com/title/81311783

The main takeaway for me was that many of the things that define the human condition today go back millions of years before we were human. For example: us-vs-them conflicts between tribes, and political squabbling within the tribe. Point being, there's only so much we're going to be able to do about the kind of creatures we are.
