Discussion about this post

Phil Tanny

What I like about your article is how it is expanding our focus beyond AI itself to the properties of the human culture which is developing and using AI.

It seems difficult to understand and predict the future of AI from a purely technical perspective. I'm not sure even the experts can credibly do that.

So, for us in the broad public especially, it seems important that we not only learn "what's really going on in AI," but also develop our understanding of what is really going on with human beings. Human beings have not substantially changed in thousands of years, so known facts about how we have interacted with new technology in the past can offer useful insights into where we may take AI.

Seventy years ago we developed the first existential-scale technology: nuclear weapons. After a lifetime of their existence we still don't have the slightest clue how to make ourselves safe from this technology. The more troubling fact is that we've come pretty close to giving up trying. Even the brightest, best-educated minds among us typically don't find these civilization-ending weapons interesting enough to discuss.

This is the culture that is now creating new existential-scale powers such as AI and genetic engineering, with more and even larger powers likely to emerge from the knowledge explosion at an ever-accelerating rate, thanks in part to AI.

The primary distorting filter I see in understanding AI is an unwillingness to be objective and honest in examining ourselves. What we need more than a better understanding of AI technology is a mirror.

Fred Hapgood

On a related matter: it is not at all obvious (to me) what progress in generative AI (or whatever you want to call it) is going to look like over the next ten years. The texts being generated right now are interesting because they are unexpectedly coherent. But they are not particularly insightful. They are not intelligent in the way that the people you and I think of as especially bright are. So one direction of research might be to attack this problem: to raise the IQ of text generators. But from where I am sitting, that does not look like a well-posed problem. So perhaps progress will run off in another direction altogether.
