Alberto, great post! I coincidentally wrote on very similar lines as yours with my Substack post yesterday: https://trustedtech.substack.com/p/trusted-ai-005-our-average-future
I used Google Translate (and machine translation more broadly) as my example of a technology that was also hyped as world-changing and utopia-bringing and ended up...not.
My understanding is that the 1956 conference took it upon itself to decide explicitly what the new field was going to be called. The two candidates were "Artificial Intelligence", which Marvin Minsky argued for, and "Automatic Programming". I have no idea who championed that alternative, but Minsky won the day and we have been dealing with the consequences ever since.
I think it was John McCarthy, not Minsky, who came up with the term. First, Claude Shannon persuaded him that it was too "flashy" (if I remember correctly), so they tried something else, but it didn't work. Finally, they settled on AI because they knew it'd get more funding. So yeah, it was all for the money in some sense (I'd say they were honest about wanting to build agents with human-level intelligence, although they greatly underestimated the quest or overestimated their ability. Or both).
I pick "underestimated their quest". Remember what they called people who they thought were less intelligent than they were: simple. Given that prejudice it was an easy mistake to make.
Experts strive to reach their audience with their worldviews and information, and in this endeavor, they must navigate a media landscape that rewards soundbites over substance. Moreover, they must also secure a continued presence in mainstream media, a task often contingent on the splash their appearances make rather than the depth of their insights.
How can the most knowledgeable and well-intentioned experts develop nuanced perspectives amidst the attention-grabbing and oversimplification tactics employed by media, algorithms, and self-proclaimed, less scrupulous "experts"?
The responsibility rests not only with the media or algorithms, but also significantly with us, as consumers of this information. We must cultivate a demand for more nuanced and in-depth analysis, signaling to these platforms that there is an audience for such content.
The challenge is not simply about modifying the behavior of the experts, media or algorithms but about reshaping the entire ecosystem of information dissemination. This involves fostering a culture that values in-depth analysis over sensationalism and empowers experts to engage directly with the public--as you do here, Alberto.
It's a tall order, but one that could significantly improve our collective understanding of complex and world-bending issues like AI.
Couldn't agree more, Pascal. Not sure I'm as optimistic, though. I nevertheless expect that AI will slowly fade from the spotlight (not because of a winter, but because we will, hopefully, get used to it), so that these communication and marketing problems disappear, opening the way for more honest science.
A factor that I think weighs pretty heavily is that there is a sector of the population that worships -- and I do mean worships -- what they call and see as "intelligence". "Intelligent people" were the people that they respected and deferred to and tried to model themselves after. (Or perhaps what I mean is that they saw the people who got all the goodies as being "intelligent".) "Intelligence" was the way they ranked humans, very much including themselves. If they had an IQ of 125 they felt superior to people with IQs of 100 and inferior to those who boasted an IQ of 150. People with high IQs were just more likely to be "right" about just about everything than people with lower ones. A world in which the average IQ was higher than theirs would be a nightmare. They would go through life feeling inferior to everyone. Reading about machines that are "artificially intelligent" triggers all these responses.
Thanks for this take, Fred. I will actually touch on this in one of my next articles. Not exactly with the framing you mention, but related to the arrogance of intelligent people (so, instead of the pov of intelligent people feeling inferior to AGI, I'll use the pov of intelligent people in AI being arrogant enough to think they can build and predict this future). I'm still drafting it, but I'll probably publish it soon-ish.
"inferior" comes close to my meaning but what I had in mind was closer to "anxiety". The people I was thinking of -- and I have worked with them all my life -- get their sense of worth out of being more intelligent than whomever they are talking to -- you can hear this in their voice -- which means they feel inferior in some very deep sense to anyone they meet who seems more "intelligent" than they are. They live with the anxiety of interacting with people like that 24/7. This is the first question in their mind whenever they meet anyone. All these anxieties spill out when the topic of "intelligent" machines come up.
If only the Dartmouth conference had agreed to call their field "automated programming" all this psychic torture would have been avoided.
It's like you read my mind lol. Another draft I have is about the name "AI," using as inspiration and starting point a recent essay by Naomi Klein in The Guardian (which you may have read already) in which she flips the term "hallucination" on its head to criticize AI industry leaders and their public stances on the present and future of AI. I will argue that "AI" is the original hallucination, the original sin of the field, and that it all went downhill from there. I'm still not sure if I will explore how it all could've been if it had been called differently, or maybe analyze how this term influences our beliefs and perceptions... Idk yet, just thinking out loud.
Emotions have always been the enemy.
Excellent piece, Alberto!! Will re-read tomorrow. I just came back from a giant big-box home reno store with almost no staff except for two security guards to direct you to self-checkout and look at your purchased items and receipt. Two things we will live with for sure under AI: surveillance capitalism and rapid change driven by the need for ROI on the huge investments AI requires. Those investments will pay off with automation that deskills, detasks, and leads inevitably to job losses on a huge scale. But it won't be utopia or a horror show; that's just a way of framing the debate to drive clicks.
Thanks for your take, John. I agree that any AI that's developed today will respond to the needs of capitalism and will inevitably be used in the ways you mention. I don't think anyone can deny this. I just hope we can find a way to do things better (at least better than we did with social media in the last decade).
Can you define "knowledge explosion"? I've read your comments on this, but maybe it's more specific than what I interpreted. In that case, I could direct you to something about that, if it happens to be something much more concrete than my first impression.
I think I agree with you partially on this idea (I just reread the second post in depth, which I had already skimmed when you first posted it here). But I don't think the real problem here is knowledge per se, but the things that underlie it and push it in directions we should avoid, like making nuclear weapons, genetic engineering, or superintelligent AIs. Those things are, for example, capitalism, us-vs-them conflicts between countries, competition instead of cooperation at many levels, etc.
I'd say that if we were able to eliminate our hunger to know more, we'd also be able to eliminate those other things. And I think it'd be preferable to do the latter and keep our desire to know more intact. In the end, knowledge has brought us a lot of important things that improve our collective and individual well-being, like medicine and psychotherapy, art and music, or bicycles and refrigerators.
Hey Phil, thanks for this contrarian take. A few things:
"Scientists are the least objective observers of how much science we should do." When you have grounded knowledge about something you can't be objective. Pure objectivity can only occur with absolute ignorance. To me, objectivity in this sense means not much. Scientists may not be objective, but they're the most capable, which is what I care about here.
About "AI industry experts" we agree. I'm definitely not referring to them here. I believe AI should be approached with a multidisciplinary mindset. It can't be done any other way due to the reasons you list.
"On questions of that scale, scientists ... are just like the rest of us." Disagree here. There are debates of opinion and there are debates of fact. In the latter kind, a scientist's take shouldn't be valued similarly to that of a layperson. For instance, I'd rather have expert aerospace engineers build the planes I fly with. I'd rather have a doctor examine my body than a lawyer. I'd rather have a lawyer defend me in court than a doctor. And so on. Some questions, though, aren't a matter of fact. And among those, there are big questions that you may be thinking about here. In those cases, we agree.
"The biggest most immediate threat we face today is not AI, but nuclear weapons, a subject overwhelmingly ignored by the intellectual elite class as a group, with only a relatively tiny number of exceptions." I don't think it's a "tiny number" of scientists who have deeply thought about nuclear weapons. I'd dare to say that this is probably one of the most recurring topics in conversations among scientists of any discipline. The fact that we built these weapons, the US used them, and we, as a society, haven't quite succeeded in completely stopping their production isn't as much a consequence of scientists' carelessness than of geopolitics forces. I agree, as you know, that nuclear weapons are probably more worrisome right now than AI, by far.
I think I understand better now where you're coming from. "It's not the scientist's technical knowledge which impairs their objectivity, it's their RELATIONSHIP with knowledge, the "more is better" knowledge philosophy." I agree with you here (just the exact quote I pasted, not the whole sentence). I'm curious to know why you object to this mindset (I don't, FWIW--I guess I identify with that intellectual curiosity, although maybe not above moral considerations, contrary to what Oppenheimer's post-bomb claims suggest).
Good points, Phil. I really appreciate your essay, Alberto, especially the nuanced approach you’re taking.
When scientists are the ones making all the decisions, we get people like Hinton, who are steering our society using only one part of their brain. We need all sorts of “experts” contributing to decision making, even people who are not normally considered experts. Only when there’s real diversity in our leadership can we really do what’s best for everyone.
"Only when there’s real diversity in our leadership can we really do what’s best for everyone." 100% agreed Andrew!