What I like about your article is how it expands our focus beyond AI itself to the human culture that is developing and using it.
It seems difficult to understand and predict the future of AI from a purely technical perspective. I'm not sure even the experts can credibly do that.
So, for those of us in the broad public especially, it seems important not only to know "what's really going on in AI", but also to develop our understanding of what is really going on with human beings. Human beings have not substantially changed in thousands of years, so known facts about how we have interacted with new technology in the past can offer useful insights into where we may take AI.
Seventy years ago we developed the first existential-scale technology, nuclear weapons. After a lifetime of their existence we still don't have the slightest clue how to make ourselves safe from this technology. The more troubling fact is that we've come pretty close to giving up trying. Even the brightest, best-educated minds among us typically don't find these civilization-ending weapons interesting enough to discuss.
This is the culture that is now creating new existential-scale powers such as AI and genetic engineering, with more and even larger powers likely to emerge from the knowledge explosion at an ever-accelerating rate, thanks in part to AI.
The primary distorting filter I see in understanding AI is an unwillingness to be objective and honest in examining ourselves. What we need more than a better understanding of AI technology is a mirror.
"The primary distorting filter I see in understanding AI is an unwillingness to be objective and honest in examining ourselves. What we need more than a better understanding of AI technology is a mirror."
I agree with you that this is critical for us, but I don't think it's a filter for understanding AI. It's more related to the reasons why we're building AI in the first place--which is a different but equally important topic.
Thanks, point taken. I agree, our lack of understanding of ourselves is not a limiting factor in understanding the technical properties of AI. I should have said "understanding our relationship with AI".
On a related matter: it is not at all obvious (to me) what progress in Generative AI (or whatever you want to call it) is going to look like over the next ten years. The texts being generated right now are interesting because they are unexpectedly coherent. But they are not particularly insightful. They are not intelligent in the way that the people you and I think of as especially bright are. So one direction of research might be to attack this problem: to raise the IQ of text generators. But from where I am sitting that does not look like a well-posed problem. So perhaps progress will run off in another direction altogether.
ChatGPT seems to be quite an upgrade on this. It's apparently more coherent (though it still makes things up, as always happens). I'd like to read the paper.
Very interesting essay. However, four filters seem to me a rather strong generalization of the ample variety of psychological phenomena that arise when reading and acquiring knowledge, which include perception, cognitive biases, and cognitive distortions that all of us human beings share to some extent. On the other hand, the “bubble effect” has a critical impact on the information and knowledge at our disposal in the media, creating isolation and redundant feedback through what the algorithms show us to read, and in turn shaping the knowledge we get. Anyway, I found your article very interesting; it provokes our awareness of, and reaction to, the knowledge we read in the media.
Agreed, Ricardo; I mention that at the end. This article is limited because listing everything that influences how we get knowledge would take a book (or a series of books). Still, it's a short manual that covers the main ones.
Here's a prediction you won't see every day -- AI is going to disappear in a decade or two. The reason is that AI, as the term is used today, is almost entirely aspirational. As it spins out real applications, the technology will take on the identity, literally, of those applications. An example might be Expert Systems. Probably nobody reading this is old enough to remember them, but they were a symbolic reasoning technology that people got all excited about fifty years ago. Then the term disappeared from the papers. The reason was not that the technology had disappeared but the reverse -- computing devices just came with the technology. It was like, I don't know, RAM or something. It was just assumed. I grant you that there will always be a handful of researchers using the term to refer to blue-sky applications, whatever those might be in twenty years, but what we think of as AI development today will be subsumed into, and named after, the development of specific applications.
That's an interesting observation. I'd agree, except that I think the "AI" buzzword sits above that. In the sense that AI encompasses the things we don't yet quite understand well enough--even if "the things" change over time. As soon as we understand something, it becomes part of the non-fancy group of AI-related technologies that don't get the luxury of falling under the AI name.
AI is a marketing box that serves the purpose of grabbing the attention of outsiders while the inside of the box constantly changes. In fact, expert systems were inside the AI box a long time ago and rose to such prominence that they even overshadowed "AI" for a while. Once they stopped being fancy (because the promises were too ambitious and because we adapted to their existence), they went out of the box (which kept the name of AI) and into more common products, like computing devices, as you say.
The "true AI" concept, as it was originally conceived, is so far in the future that this trick will work for a long time.
"What I read and hear is that the large majority of them is humble and modest." I don't doubt about it, but I don't think that's incompatible with my arguments.
"Isn't the fact that exaggeration-ish statements find a fast lane to the press is more a press thing that a research thing?" Well, I think it's more complex than that. If we want to get to the bottom, I'd say it's a money thing more than anything. The press and companies have a synergic relationship where both get what they want.