31 Comments

Imagine that I built a car that can go 800 mph, and I confidently predict that next year's model will achieve 1200 mph. This may all sound quite impressive, until we realize that my "genius" invention ignores the fact that pretty much nobody can control a car at those speeds. That's what I see happening here: highly skilled technicians with a very limited understanding of the human condition, or even much interest in it.

AI or SI has nowhere to get its values but from us, and/or perhaps the larger world of nature. In either case, evolution rules the day, the strong dominate the weak, and survival of the fittest determines the outcome. If Altman's vision comes to pass, we humans will be neither the strongest nor the fittest. Altman's vision might be compared to a tribe of chimps who invent humans with the goal of using the humans to harvest more bananas. That story is unlikely to turn out the way the chimps had in mind.

Speaking of chimps, a compelling vision of the human condition can be found in the documentary Chimp Empire on Netflix. Perhaps the most credible way to predict the future is to study the past, and Chimp Empire gives us a four-hour close-up look at our very deep past. The relevance is that the similarity between human behavior today and that of chimps is remarkable.

The point here is that the foundation of today's human behaviors was built millions of years before we were human. The fact that these ancient behaviors have survived to this day almost unchanged in any fundamental way reveals how deeply embedded they are in the human condition.

AI is not going to change any of these human behaviors; it will just amplify them. Some of that will be wonderful, and some of it horrific. When the horrific becomes large enough, it will erase the wonderful.

author

What's worth questioning is whether the premises of Altman's discourse are actually true. Can we build AGI or SI? Will we? Is it inevitable?

If the answers are affirmative, then what you say makes a lot of sense. And I agree with you (I don't think Altman or others have thought this through from the perspective you suggest).

If the answer is negative (as I suspect), then we have to redirect the questions to why we're talking about this at all. Who does this discourse benefit? What are they distracting us from? And, finally, how are the real, current "AI" systems going to affect the world (for good and bad)?


Good analysis, Alberto.

As you've written, we don't really even know what AGI or SI is, so it's difficult, if not impossible, to predict whether it will emerge.

It does seem safe to predict that the current technology will be pushed forward as far and as fast as possible, within the context of some token limitations. So whatever we wind up calling it, AI in the coming decades seems likely to advance far beyond its current state.

As I've written here previously, it also seems likely that AI in whatever form will serve as a further accelerant to the knowledge explosion. So if we are to look at AI holistically, we should consider both the benefits and risks that will flow from AI itself, AND from those fields which AI accelerates.

author

I think your stance on the knowledge explosion is similar to Harari's idea that humans learn to manipulate the world much earlier than we learn to understand it. It's during the gap in between (if it ever closes, which may not happen for especially complex systems) that we do the most damage to ourselves and the world around us. It's also during that gap that we're the most overconfident: we believe we're playing god with the laws of the universe when, in reality, we understand but a minimal part of them. That happened with nuclear weapons, as you always say. Also with the upcoming climate catastrophe.

That's exactly where we are with AI. And, if Altman's vision turns out to be true, that gap won't close but will grow larger forever in this field. That's what I tried to convey in my article "gpt-4: the bitterer lesson." (I doubt his premise, though, as I just said.)

I think it's extremely important to be super clear about all this to decide which topics should occupy our time now.


You doubt Altman's premise that something like AGI is coming?

As to which topics should occupy our time...

Ideally (but probably not realistically) we would zoom out from the details of particular technologies to focus on the process generating all these emerging technologies.

As we wrestle with AI questions, one thing we might learn is that we're going to be doing such analysis with quite a number of other new technologies over the rest of this century. You know, a century ago in 1923 they couldn't even imagine much of what was coming in the 20th century.

So ideally, we would establish some routine procedures for analyzing the pros and cons of emerging technologies so that we're not making it up from scratch in a sort of random manner every time.

One dividing line that might help us sort the pile is the question of which emerging technologies present a threat to the system as a whole.

author

"You doubt Altman's premise that something like AGI is coming?"

Yes. I doubt it's coming as soon as they think. That it's as problematic as they think. That it'll inevitably develop into an SI. That something like an AGI is even possible in principle ("human-level AI" is more appropriate), etc.


That's certainly a reasonable position, Alberto.

What if we dropped the AGI label and concept and just said that AI is almost certainly going to continue to grow in power, destination unknown?

We can probably all agree on the growing-in-power part, and can then form our relationship with that.


In April I was in San Francisco and heard Sam Altman speak. It was then that it dawned on me how much he and the people in the same tech bubble appear to have lost touch with reality.

I couldn't find a better example of Heidegger's Gestell: tech has become so pervasive that they can't think outside of technical solutions anymore. Every problem needs an AI solution.

But does it?

Great article, Alberto.

author

"Tech has become so pervasive, that they can't think out of technical solutions anymore." Accurate quote!

I've never met Altman, but I agree. Once you take one step back from their discourse, it becomes super clear. That discourse is, however, quite appealing to nerdy techies and extreme techno-optimists...


In an interview with Reid Hoffman, Altman was asked the following question: What aspects of life do you think won’t be changed by AI?

His reply: "All of the deep biological things. I think we will still really care about interaction with other people. [...] So I think the stuff that people cared about 50,000 years ago is more likely to be the stuff that people care about 100 years from now than 100 years ago."

I found this interesting because it appears to me that he has an understanding of what "matters to people" (the same needs as 50,000 years ago). What I don't get is how he reconciles this obvious tension with his push for technology.

For me it's way more obvious to say: in 50,000 years we haven't changed that much, so perhaps more tech is exactly not the solution we need?

But maybe it's just me.

Here's the link to the interview in case you are interested: https://greylock.com/greymatter/sam-altman-ai-for-the-next-era/

author

"What I don't get is how he reconciles this obvious tension with his push for technology. For me it's way more obvious to say: in 50,000 years we haven't changed that much, so perhaps more tech is exactly not the solution we need? But maybe it's just me."

Thanks, Christian! I watched that interview and I agree with you. There seems to be an incoherence in there, but technologists (and scientists) often have a hunger for more knowledge that's hard to satiate. Altman recognizes the value and inevitability of our biological endowment, but another part of him wants to escape it (in 2015 he wrote about uploading our minds to computers as a possible future after superintelligence).

That's the curse and the blessing of humanity: our curiosity. We can see very far ahead and wonder how to get there even when it's dangerous. We want to manipulate the laws of the universe even when we don't understand them. And we have managed to get off the evolutionary path that optimized us and into an alien world for which we're not prepared, mentally or spiritually.

That's why we fight all the time against ourselves and the world around us, even when we're not aware of it (more so in developed countries). Technology has brought wonders and quality of life in some ways. In others, it may be our worst mistake.

Altman looks up to Oppenheimer, so that should tell you why he thinks that way. Why he's crazy about developing AGI or whatever. It's not that it's inevitable; it's that he couldn't live without the thrill of progress for progress's sake. If you can invent or discover something, you go for it, he'd say, and only then do you look back and wonder if it was a good idea (this last part he would never admit, but I think it's accurate).


Alberto writes, "...technologists (and scientists) often have a hunger for more knowledge that's hard to satiate."

But they only have a hunger for how to get more knowledge, not for knowledge about how to manage knowledge acquisition.

author

And actually, they (not scientists but technologists) don't really have that much hunger for knowledge as in "understanding the universe," but for the *manipulation* of knowledge in some way, without taking the trouble to first understand what it reveals about us or the world.

Altman is that kind of person to a much larger degree than he resembles a scientist like Oppenheimer, whom he admires as a role model.


Good point: technologists are not scientists, a useful distinction.

Another way to look at both is as a new kind of "clergy". That is, leading cultural authorities whose pronouncements are widely accepted by the broad public as a matter of faith.

In the Enlightenment centuries ago we thought we were trading faith for reason. But really we just traded one group of authorities for another group. The cultural authorities changed, but our relationship with cultural authority remained much the same.


100%. This reminds me of Nassim Taleb and his concept of Neomania:

One common affliction responsible for fragility in the world, writes Taleb, is neomania, the “love of the modern for its own sake.” We are constantly in pursuit of the next big thing, but how can you know that something will last if it's only been around for a year, offering no information on its future longevity?

Time is nature’s greatest filter, eliminating all but the antifragile, meaning that what is oldest today — be it a canonical book, a long-running Broadway show, or the game of chess — has stood the test of time and isn’t likely to disappear any time soon, whereas the books and games released tomorrow may become outdated and irrelevant in a year.

https://www.latimes.com/books/la-xpm-2012-dec-20-la-ca-jc-nassim-nicholas-taleb-20121223-story.html

---

In July, the new Christopher Nolan movie about Oppenheimer comes out. I don't know much about him, to be honest, so I will probably learn a thing or two along the way.

Thanks for your response; that's a good discussion. Feels good not to be alone with these thoughts.


Altman says... "I think we will still really care about interaction with other people".

Two phenomena argue against this:

1) Over the last 30 years, vast numbers of people (myself included) have largely traded real-world, face-to-face friends for much weaker and more temporary encounters on the Internet.

2) Vast numbers of people care more about their dogs than they do about their neighbors, friends, and family.

Here's why: real-world human relationships require a considerable amount of negotiation and compromise. The Net and dogs require far less of both. Example: if this post starts to bore you, you can instantly scroll on to something else.

As AI further develops, it will give us the illusion of human contact without the negotiation and compromise. What we want, when we want it, is what will win. Those born into that world will grab it with both hands, consider it completely normal, and roll their eyes at old people for feeling that talking to bots is weird.

As best I can tell, a big turning point will come when the AI chatbot interface evolves beyond text-to-text into a human face with voice. That will bring in masses of people who aren't nerds.

The next big turning point may come when those of us alive today fade from the scene and a new generation is born into what is coming. At that point, talking with bots will be normalized.

In the not-too-distant future, bots will do a better job of meeting many people's needs than other people can. It seems likely this will further weaken person-to-person bonds in general, with less-than-wonderful implications for society as a whole.

The upside will be that many people who are ignored and discarded today will find some solace in AI bots. The downside is that this will make it easier for us to continue ignoring and discarding them.

We should listen to people like Altman regarding the technical details of AI and the AI business landscape. Beyond that, their analysis of the social implications of AI is really no better than anybody else's. I think we get these two things confused quite a bit.

May 25, 2023 · edited May 25, 2023 · Liked by Alberto Romero

1. One gets the impression from their post that OpenAI haven't thought through the physical and other impacts of radically expanding economic growth.

• How is this to occur? Making more stuff? That obviously isn't a good idea, from an environmental POV.

• Increasing services? That raises more questions:

-- (i) will the exchange value for services that expand GDP be paid to a very small oligarchy of companies, like MS? How will the benefits of that growth be shared?

-- (ii) if SI usurps the economy of services, what's left for most humans to do? How will their standards of living increase, and what sort of productive work will be available to them?

-- (iii) again, what will be the environmental impact? Most of the G7 countries started getting most of their GDP from services back in the 1950s or 1960s -- and increases in the service sector obviously still entrain exponentially growing physical impacts.

• And is the idea that an SI will come up with solutions to all our environmental problems? Sounds a bit magical. And how will it physically effect those solutions? By enforcing its will on us humans? Also, an SI can't bring extinct species back to life, or restart ocean currents like the Atlantic Meridional Overturning Circulation -- but it does require a lot of power to stay running, and all the more so if it starts messing with the physical world.

2. Thinking about regulation and the IAEA model, your cat analogy is apt if you limit the regulatory authority to software development, training, etc. But couldn't part of this IAEA-type approach be direct regulation of the requisite semiconductor ICs as "controlled substances"? Require inventories of current stocks of GPUs and the like, and register all sales of them? And perhaps register sales of semiconductor fab equipment, and possibly some 3D printers? This would give enforcement authorities a handle on who truly has the potential to implement an extralegal SI, and perhaps a separate legal basis for derailing those villains. My cat may be smarter than me at finding spaces I can't reach, but even he can't build a secret fab.

author

"OpenAI haven't thought through the physical and other impacts of radically expanding economic growth"

This is very important, and I agree with you. It's hard to imagine how the world would change if their predictions materialized. They don't care, because that's not something to figure out now. They just trust their intuition and their ability to create a safe SI. Then the SI would figure out the rest, whatever that is.

It's a mix of wishful thinking and detachment from the reality of people.


Great analysis - I JUST published my own take on it too, wondering why nobody was talking about this :)

author

Thanks, Jurgen :) People from AI ethics have actually criticized the post heavily!


I've missed that (probably bc I'm not on Twitter haha).

I like how you approached the subject. What stood out most for me is the way they [OpenAI] use the terms AGI and superintelligence and, quite deliberately, don't clearly define either of them, for rhetorical purposes.

author

They're worth following because they always highlight the other side of every question: the one that the most prominent voices (like Altman) try to cover up.

I agree; both are merely marketing terms. I think most people aren't buying the story, though. That's good!

May 24, 2023 · Liked by Alberto Romero

I am sorry to say this. I should be circumspect and reflective and respectful in the way you are, but he sounds like someone who has lost his mind. Or Elon Musk. Both/and.

author

Yeah, I understand where you're coming from. I do it this way because ad hominem attacks are what I see from the ethicists I was criticizing the other day. I think the appropriate way is to counter their arguments with more arguments. And when there are no arguments because they're talking about science fiction, say that as well.

May 24, 2023 · Liked by Alberto Romero

What you’re doing is essential and I thank you. I am being ad hominem, and it’s not effective, except that sometimes it does help to say WHAT? in order to maintain epistemic clarity in situations that challenge clarity. And if there weren’t a flood of such rhetoric, and ideas were presented provisionally with the necessary caveats… But the way you do it is very much needed.

author

"And if there wasn’t a flood of such rhetoric, and ideas were prevented provisionally with the necessary caveats…" I feel you.

Feb 29 · Liked by Alberto Romero

Fantastic article! Your insights are not only well-researched but also presented clearly and engagingly. Artificial Superintelligence (ASI) represents the pinnacle of human ingenuity and technological advancement. As we venture into the realm of creating machines that surpass human intelligence, we stand at the precipice of a new era, one where the boundaries between science fiction and reality blur. I also found this article very informative; do give it a read: https://www.mobileappdaily.com/feed/what-is-artificial-superintelligence


I have only three coherent points:

1. “The (human) governance of superintelligence” is the most painfully humorous and incredibly naive concept I’ve ever heard of.

Let me try to frame this with a hilarious metaphor. Let’s consider some of the most talented fliers on the planet: the crows. Now imagine that there was a large flock of crows, clearly above average, and they got to talking. Keep in mind that crows are some of the most intelligent birds on the planet, so I know it’s a bit of a reach, but just bear with me. Imagine that one of them is named Bob, and he sort of fancies himself the leader of this bunch, so he marshals them all together and makes this fabulous pitch.

He’d start out saying something like this: “You know guys, I’ve been thinking. We are a really talented bunch of flyers. We’ve been doing this since birth; we are super agile in the air and we never bump into each other, or hardly ever; we’re so talented at flying that we can land on powerlines and tree branches. We are just incredibly good at this aviation business. So I’ve been giving it a lot of thought, and I think we should take over and manage this organisation called United Airlines. It might be a bit of a challenge, but I think we can do it.”

Tim: “Do you really think we could pull that off??”

Bob: “Well, we’ve got a reasonable shot at it. It might be pretty involved and downright complicated, but we’re not so dumb. I mean, we’re talking to each other now, albeit in kind of a squawky pre-language way. So we’re pretty sophisticated, am I right?”

Steve: “So what would we have to do? How can we take over and actually manage this, what do you call it… an airline?”

Bob: “Well, each of us would need to assume different roles in a corporate hierarchy. Somebody would have to be president and CEO. I think I’d be good at that, so I’ll take a turn at it if you don’t mind. But we also need a board of directors; a director of flight operations; a chief pilot; a head of maintenance; a head of HR…”

Steve: “Whoa, whoa, whoa, what’s ‘HR’?”

Bob: “Well, it’s like ‘CR’. You know, like Crow Relations, except for these creatures called ‘humans’, who are really a lot more finicky, complicated, and much more involved to work with. They’re going to want things like pay, sick leave, health insurance, rest and duty period delineation, and hiring and firing policies, for a start… and other crazy things like sexual harassment policies; we’re going to need those too.”

Steve and the rest of the crows: “…huh?? …Wah??” (generally looking more confused than a crow has ever looked).

Bob: “Oh, and then some of us are going to have to actually learn how to fly these rather large and complicated things called airplanes. Oh, and maintain them. And regulate them. But hey! We’re crows!”

***

So, if you haven’t figured it out by now, we’re the crows. We may have a sense that some of us are pretty smart, but that’s mostly half-assed aspiration. Truth be told, we don’t even begin to know what the fuck we’re doing, even abstractly, when it comes to “governing a superintelligence”. I mean, look, I’m a human, type-rated 747 pilot, and I couldn’t even begin to tell you how to run an airline schedule. I wouldn’t even know where to start. The crows? Fugetaboutit. The humans governing superintelligence?? Fugetaboutit!!

2. “AI is the tech the world has always wanted” -Sam Altman

Well, maybe not all of us, but a lot of us. Me included. I want it for very personal and selfish reasons: namely, my well-grounded lack of confidence in humanity to move us forward as a civilisation in any way that’s not fits and starts of greed and corrupted incentives. Let me illuminate further.

I have a chronic, degenerative, incurable health condition. It may not ultimately kill me, but it might. I seriously doubt humanity alone can get its shit together in any manner coordinated enough to help me get over this situation without the help of AI/AGI (which has already made rather undeniably blatant inroads toward assisting in the creation of a cure). Human-crafted capitalism has a habit of profiteering from the treatment of disease, not curing it. (There’s BIG money in treatments, not cures.) So yes, I want AI/AGI, or even SI, in my corner. I don’t trust humanity to do right by me, or even its capacity to do right by itself. I’m 63, so I truthfully don’t want humanity to get all timid if there’s a 10% chance of human extinction. I’m fine with a 90% chance of survival. I’ve personally been through worse. Does that make me selfish? Perhaps. That’s a human-nature quality. Does that make me timid? Fuck no! And I’m guessing a lot of humans feel the same way. Throw the damn dice already!

3. Realize that humanity is, in the big picture, just a boot-up species for superintelligence. We are the dinosaurs of our era, or the Intel 386 chips running a mix of DOS and Windows 3.1.

People get all worked up worrying about AI goal misalignment, but it’s really human bad behaviours that are completely out of alignment with long-term survival. Actual human conduct has already demonstrated that it’s the real threat to all that lives. AI/AGI/SI will at least be coherent in goal attainment, with a statistical likelihood that humans could never match. Humans will never be uniformly coherent about anything. That human quality is a serious drag on civilisation and its long-term prospects. Humans are generally the problem, just like religion (a human invention and maladaptive practice) tends to fuck up everything it touches.
