43 Comments

Thanks Alberto for picking up on this. The story actually has an interesting additional angle to it: the human condition in pre-modern times was subject to the will of the gods, which could only be inferred through oracles and signs, never directly known. The Enlightenment taught us to see for ourselves, and since then we have taken it for granted that we could know the world, and act according to knowledge. We now construct entities that will become more intelligent than us, that we cannot control (cf. Alfonseca et al. 2021, https://dx.doi.org/10.1613/jair.1.12202), and – as you point out – that we cannot even properly know. It is striking to realize that in this sense we will return to the pre-modern state. Modernity was but a phase.

author

"Modernity was but a phase." Loved this sentence. It surely feels like it. We knew nothing, then we began to know more and more, and eventually we may find out that not everything is for us to know.


It actually feels glorious. The return of wonder, the return of magic. Man once again humbled, once again knowing there are things he cannot understand, or surpass, or control.

It could be the end of the cult of rationality, which sounded nice in theory, but resulted in people being arrogant without actually being rational. That's how we get 'scientism': 'trust the science' without ever reading a paper, while the papers themselves turn out to be unreproducible.


I fear cults/new religions will proliferate soon: some for and some against AI, all fueled by confusion, frustration, insecurity, and fear — which psychopaths are wont to leverage for material gain, status, etc.


I liked this:

"And soon, we’ll be just spectators, mere observers of a world neither built by us nor understood by us. A world that unfolds before our eyes—too fast to keep up, and too complex to make sense. The irrelevancy that we so deeply fear—not just as individuals, but as The Chosen Species—is lurking in the impeding future that we’re so willingly approaching."

Artful, and insightful.

One irony I see in this future you are considering is that on one hand we are deeply confident as we fuel this future, and on the other hand we seem deeply defeatist. As you've written, some in the AI industry have expressed such concerns, but as I understand what you've taught us, they also seem to feel we have no choice but to go forward. And so they keep pushing forward toward what concerns them with great confidence and ability.

I would be interested in being educated about those in and around the industry who are arguing we should just stop. Who are they, what are they saying, how influential are they, etc.?

I'm a boomer geezer, and much of my perspective arises out of our experience with nuclear weapons. My generation didn't invent nukes, but we funded their mass production and improvements etc. And now we have no idea what to do next. So, as we boomers depart the scene, we're dumping our FUBAR in the laps of our children and grandchildren.

I see current generations basically repeating this mistake with AI and genetic engineering. You'll build it, and then become prisoners of it, and then pass the prison on to your kids.

Mar 22, 2023 · edited Mar 22, 2023 · Liked by Alberto Romero

Eliezer Yudkowsky is one of those alignment researchers who advocate simply stopping AI progress. Then there are those who do cutting-edge research in AI and, even if they think it's not going to end well, are captivated by it. One of the most prominent researchers, Geoffrey Hinton, put it this way when asked about it: “I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”[1]

[1]: https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

Mar 22, 2023 · edited Mar 22, 2023 · Liked by Alberto Romero

Thank you very much for your reply, and for that link. You may have just sold me on a New Yorker subscription. I knew about Bostrom, but not in such detail.

I've been vaguely aware of Yudkowsky for a while, but after reading that he started the LessWrong forum I turned away, as I was not impressed by my time there. But I should revisit this impression. A free forum is, after all, a "get what you pay for" experience.

I am a bit confused by your description. If Yudkowsky believes we should stop AI (if that is his perspective), why is he an alignment researcher? Personally, I believe alignment and governance schemes are a form of fantasy.

It might sound surprising coming from me, but I actually have some sympathy for the "we can't help ourselves" referenced by Hinton. I'm that way about typing. I know nothing I ever type will ever make any difference at all, but I keep pounding away anyway. Oppenheimer was born to do science, and I was born to type, and each of us is compelled towards irrational action by our DNA. I can get that.

Luckily, while my typing is sometimes inconvenient to people I respect, and often gets me banned, it's probably not going to get us all killed. Me maybe, but not you.

Finally, are there public debates available between the "march forward" and "stop" camps of AI commentary?

Thanks again!

author

Hey Phil, I wouldn't say Yudkowsky is an alignment researcher in the sense of working to make future AIs more aligned, but in the sense of studying whether aligning AGI and humans is even possible in the first place and, if the conclusion is negative, accepting that the only rational choice is to stop the AI research and production that could lead to those unaligned AIs. And indeed, that's the conclusion he reached, so now he advocates for a full stop.

One article you may want to read about the AI alignment debate is this one (from Less Wrong precisely): https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell


Thanks for clarifying Yudkowsky for me, appreciate it. Should it interest you for future articles, I'd welcome learning more about how Yudkowsky has been received by the AI industry.

I'll check out the article too, thanks. Just to let you know, I've been banned from LessWrong because they aren't as receptive to contrarian views as you are.


It feels like the saying "we are a biological boot-loader for the next step on evolution’s ladder" is ringing more and more true. I’d love to hear what you guys think about that.

author

I see some truth in that, for sure (I'm not an advocate of transhumanism, though).

Mar 21, 2023 · edited Mar 22, 2023 · Liked by Alberto Romero

Who will understand the inner workings of the other quicker? Humans of the AI, or AI of the humans?

Thoughtful piece. I think the advances will be exciting in the near term and more confusing in the future.

author

Good question!

Mar 22, 2023 · Liked by Alberto Romero

Isn’t Sutton merely saying we should not try to model what humans do and instead use computation in machine learning? That seems sort of inevitable. You seem to believe that AI--because it will have problem-solving capacities that will far outstrip any human--will surpass humans in some mysterious way that makes us not ‘the masters, the rulers’ but ‘the spectators.’ It would be helpful to bring this all down to earth. What’s the causal story of how AI gets from here to there?

You say they’ll become so complex our minds won’t be able to make sense of them. But already there are many complex systems no individual mind can grasp, and systems that we create but don’t necessarily control. Is the idea that AI is going to be putting many tendrils out there for use and we won’t be able to keep track of its use? Or is the idea that the computations won’t be intelligible to us, even if the results seem accurate to us?

What seems very concerning about this post is not that you are pointing out we could create something highly complex and impactful whose effects are far beyond our ken. How many times have humans done this since the beginning of the industrial era? It’s the implication that ‘it’s a superintelligence, it’s amazing, it surpasses us, we are its subjects.’ This is a tremendously dangerous idea, because at least so far the machines make mistakes frequently. We know this. They have biased algorithms, they can’t find mistakes, they hallucinate. Our judgement has to be the last word on whether or not what they are doing is sufficient or good or correct. It has to be, because nobody else’s can be.

So far, the machines do not have critical thinking faculties. But even if they did, what would possibly be the point of our slavishness to them? They don’t need anything. Should we do this because we admire what they can do? This would be like admiring an amazing washing machine if you have spent your life washing by hand. Should we do it because we need what information, knowledge, etc. they can bring? Yes, that is the only reason we should give the results of their computations priority of place. What ELSE would be the point?

I take it this is some futurism vibe, some transhumanism going on here. Is this correct? My sense is you’re talking yourself into something. What’s funny to me is that, if you’re talking yourself into a thing based on sci-fi, I can only imagine the outcome of the sci-fi is somebody eventually wondering why the humans began to cede power to the machines. Very rarely do the heroes of sci-fi become the computers. Maybe there’s a reason for that! They don’t care about how great they are at what they do, and except in the way all machines can be admired, admiring them as agents in advance, before they even have agency (an agency that they don’t need, really, and we so far don’t have any theory to explain why they would want it), seems like it may be a category mistake.

author

"You seem to believe that AI--because it will have problem-solving capacities that will far outstrip any human--will surpass humans in some mysterious way that makes us not ‘the masters, the rulers’ but ‘the spectators.’"

Not really my point. This article isn't about AGI or superintelligence. The systems I'm referring to don't need to be more intelligent than us. I don't think they will be for a long time--but much sooner than that we'll lose our ability to understand them (not as individuals, but as humanity: so far, every system we've ever built could be understood and controlled in full if you selected the appropriate group of people to do so; modern AI doesn't meet that criterion). We may even fail to develop ways to assess with certainty whether or not we've built an AGI. That truth may belong to the realm of mysteries.
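To put a rough number on that claim, here is a back-of-the-envelope sketch. It is purely illustrative: the only real figure is GPT-3's published parameter count; the one-second-per-parameter review rate is a made-up, wildly optimistic assumption.

```python
# Back-of-the-envelope: how long would it take a single expert to "read"
# a modern model, if inspecting one parameter took just one second?
# Assumptions: 175 billion is GPT-3's published parameter count;
# the one-second inspection rate is invented for illustration.

params = 175_000_000_000
seconds_per_year = 60 * 60 * 24 * 365

years = params / seconds_per_year
print(f"{years:,.0f} years of nonstop inspection")  # ~5,549 years
```

And since individual weights carry no human-readable meaning on their own, even this number understates the problem.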


Yes! Then I agree this is quite possible! Sorry for misunderstanding your point…

Mar 21, 2023 · edited Mar 22, 2023 · Liked by Alberto Romero

The irrelevancy is felt more acutely for sure, and still we must keep building in the knowledge that we may/will be outpaced at any time. About the bitter lesson in the original article: sheer computational power still must have rules built in, rules which we set. Deep learning works on principles humans supplied. True, the model needs to be as flexible as possible and the rules as general as possible in turn. But we won't get there by murky jumps alone (and I don't believe in a singularity emerging from murkiness, not at this stage). We'll get there by learning from mistakes and by building better and with more knowledge.

The biggest problem is the access-to-knowledge part. We may not be able to understand the box, but how will we know whether all hope of understanding it is lost if we're not allowed to look inside? Meanwhile, people will continue to build in special knowledge, and those tools will outperform murkier ones until the next wave, possibly.

The problem of irrelevancy is not new: every scientist knows that the future means their work is likely to be outpaced, forgotten, disproven, or, at best, taken for granted and incorporated as a triviality in a larger whole. It is being part of a bridge that matters, and the knowledge and perspective that comes with it.

author

Sutton never denied the usefulness or value of human knowledge; he simply stated that more computation eventually overshadows it. I have to agree with Sutton there, but I can't deny that human knowledge is still more than necessary to build all engineering systems--including AI, ML, or GPTs. This essay isn't so much an attempt to minimize the value of human knowledge as it is my way of rethinking our role in understanding the world.
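A toy illustration of that point (my own sketch, not an example from Sutton's essay): in the game of Nim, a century-old piece of human knowledge, Bouton's XOR rule, identifies winning positions instantly, while a generic brute-force search that knows nothing about Nim except its rules reaches the same conclusions by spending computation instead of insight. The function names below are mine.

```python
# Nim, two ways: handcrafted human theory vs. generic search.
# (Illustrative sketch; not from Sutton's article.)
from functools import lru_cache

def xor_rule_is_win(heaps):
    """Human knowledge: a Nim position is a win for the player to move
    iff the XOR of the heap sizes is nonzero (Bouton, 1901)."""
    x = 0
    for h in heaps:
        x ^= h
    return x != 0

@lru_cache(maxsize=None)
def search_is_win(heaps):
    """Pure computation: exhaustive game-tree search knowing only the
    rules. A position is a win if some move leads to a lost position."""
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            child = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
            if not search_is_win(child):
                return True
    return False  # no moves left, or every move hands the opponent a win

# The generic method agrees with the handcrafted theory on every test
# position -- it just pays in compute instead of insight.
for p in [(1, 2, 3), (1, 4, 5), (2, 3, 5), (3, 4, 7), (1, 1, 2, 3)]:
    assert search_is_win(p) == xor_rule_is_win(p)
print("brute-force search matches the XOR rule on all test positions")
```

The handcrafted rule is vastly cheaper today, which Sutton concedes; the bitter part is that as compute grows, the generic method keeps scaling while the insight stays fixed.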


Hmm, it sounds a little like EDA (electronic design automation) being driven by synthesis tools. In that sense we can never compete with computation, and technology advances on the back of earlier tech. Maybe I am missing the point. Your article refers to Twitter reactions. Is the shakeup due to the new applications bigger on the software development side than expected? Surely it is mostly supportive still, rather than in the driver's seat? I'm trying to understand the gist of the article.

Mar 21, 2023 · Liked by Alberto Romero

I have struggled to document my feelings and thoughts on this matter. Thank you for accomplishing both.

author

Thanks Brooks :)

Mar 22, 2023 · Liked by Alberto Romero

This is truly breakthrough thinking, especially as it is supported by some evidence that we may be losing control over AI much earlier, and in a way, that we perhaps had not envisaged. Thank you, Alberto.

To those who advocate slowing down or shutting off AI research and development, I would say it is far too late to do that. We would have to go back to perhaps a 19th-century civilisation, and then after several decades we might be in the same or a worse situation. We would need a powerful World Government controlling every citizen. It might have been possible just after 1945, when such a World Government was to be set up within six months! Read the UN's history.

Anyway, it is all too late. In the current situation, any governmental or international control will at best be partial and at worst - partial and ineffective because of the methods applied. The only way to control AI is by becoming part of it. Transhuman AI Governors is the only way forward. It should be started right now. If you are interested, you can watch my video on this subject: https://www.youtube.com/watch?v=F3HzTi470Ac .

author

I wouldn't really like "a powerful World Government controlling every citizen." And I don't think it's too late to slow down or implement adequate regulation. That's, in my view, the best approach to solve this (which is more than anything a philosophical pondering, not very useful) and all the other more tangible problems that surround AI: misinformation, bias and discrimination, power centralization, security issues, privacy concerns, lack of transparency, non-accountability, lack of data governance, and even those existential risks that seem so urgent for some and distracting sci-fi tales for others.


Thanks, Alberto. It seems that a difference between your view and mine may be in the 'event horizon'. I am one of those who assume that AGI will emerge by 2030, and your article provides further arguments for that. Therefore, I consider the sacrosanct values that we have held for quite some time, such as freedom or sovereignty, as very relative now in the context of the most important value: LIFE, and the survival of our species, or rather maintaining control over its evolution.

If we agree on that, then consequently we must see all efforts at AI regulation as dismal, just tinkering at the edges. Governments approach the task of introducing the necessary changes as if the world were still changing at a linear pace. If AI, genetics, materials science, etc. change at a nearly exponential pace, then to catch up with AI, governments and international organizations should abandon anachronistic procedures and try to adapt at a similar pace. That is of course impossible.

Therefore, the only chance is that it is the AI sector itself that may hold the keys to effective AI control. It should follow the excellent example of how control of the Internet has been maintained for over 30 years by non-governmental organisations such as the W3C Consortium. I cover this subject in my latest video: https://www.youtube.com/watch?v=F3HzTi470Ac


Tony writes, "To those who advocate slowing down or shutting off AI research and development, I would say it is far too late to do that."

That's certainly a reasonable point.

But it's not too late to learn from this experience with AI and apply the lessons learned to the next power of awesome scale which tries to leap out of Pandora's box.

We should have learned this in 1945. We didn't.

We should be learning this today with AI. We aren't.

The powers available to us will keep getting bigger. And they will keep coming faster. And our maturity development will keep inching along at an incremental pace at best. If we were to plot these factors on a graph we'd see the gap between maturity and power steadily widening over time at an accelerating rate. Such a progression is not likely to end well.
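A sketch of that graph (the curves below are purely illustrative stand-ins, since neither "power" nor "maturity" is a measured quantity; the growth rates are assumptions chosen only to show the shape):

```python
# Sketch of the gap described above: exponential growth assumed for the
# power of our tools, incremental (linear) growth assumed for our
# maturity in wielding them. Illustrative only; no real data.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 100, 200)      # arbitrary time axis
power = np.exp(t / 15)            # powers arriving bigger and faster
maturity = 1 + 0.05 * t           # maturity inching along

plt.plot(t, power, label="available power")
plt.plot(t, maturity, label="maturity to wield it")
plt.fill_between(t, maturity, power, alpha=0.2, label="widening gap")
plt.xlabel("time")
plt.legend()
plt.show()
```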


Thanks, Phil. I cannot agree more with your view - see my comment above. But try to convince politicians of it. Why should these people, with their short-term view of the next election, sacrifice their position for something they don't understand, and perhaps prefer not to understand because the answer is just overwhelming? So the orchestra aboard the Titanic keeps playing on...


Hi Tony, thanks for your reply. Here are a few thoughts on powers of vast scale such as nukes, AI, genetic engineering etc.

I agree that any meaningful change in our relationship with these technologies, and the knowledge explosion in general, is unrealistic within the current status quo.

But there is some hope in the fact that powers of vast scale also have the power to radically change the status quo quite quickly.

As one example, there is no chance of us getting rid of nukes within the current cultural consensus of willful denial. But how might that cultural consensus change after the next detonation, when a threat that now seems abstract and unbelievable instantly becomes very tangible and real to billions of people?

How might the culture of denial on genetic engineering change if somebody creates a four-headed horse, and the photos go viral all over the world?

I have a harder time applying this principle to AI, but perhaps others here could provide some examples?

Point being, we probably shouldn't assume that today's cultural environment is permanent.


Hey Alberto, here's an article idea that would probably stretch the outlooks of your audience. Might be fun?

Write a piece about the Amish.

Here's a group of people that have, to one degree or another, turned their back on modern technology. And, to my limited knowledge, nothing bad has happened.

None of us wish to be Amish. But it might be good to recall that it's possible to say no to aspects of the modern world, and that doing so doesn't necessarily equal disaster.

author

That would be an interesting topic, much more so now that more people are feeling the tiresomeness that goes hand in hand with technological excess.


That's interesting, because the Amish are right where the conversation at dinner went when I brought up this article. My wife is in the camp that thinks humans will divide into the transhumanists and the still-human, and she firmly desires to be in the latter group. I said it might not be possible to opt out, and my son brought up the Amish, who did just that.

Mar 22, 2023 · edited Mar 22, 2023 · Liked by Alberto Romero

The eternal paternal dilemma of letting go of the power for our children. Let go, be the dust. Wait for the next cycle.


On principle, I stopped reading articles on GPT. But I’m glad I read this one. An amazing story. It makes me think of the book by Marcus van der Erve, AI God arising, which thoughtfully describes how compute power is just a substrate for AI, and how it could start emerging in ways we don’t yet fully understand. For me AI is becoming a new form of faith. And I’m in constant superposition between a true atheist and a believer in its potential for future ‘mystical’ powers.

Mar 23, 2023 · Liked by Alberto Romero

It's great


Your text is really insightful. I'm among the people who celebrate these advances with euphoria, but I feel this bitterness. Somehow a lot of people are already irrelevant to this system, and soon all of us will be the same. All this passivity in facing AI and other human issues is unbelievable. Even our imagination is already taken over by this dark future ruled by the Machine God. We really need to free ourselves as soon as possible.


Hi, this is my first comment on the subject of ChatGPT. I’m not a software engineer but a citizen geospatial multidimensional space-and-place scientist.

And with the help of ChatGPT, I have envisioned not who, but what, will help the human species understand the inner workings of the other quicker: humans of AI, or AI of humans.

What we are looking at is a combination of both, translated as Transhuman-Centric Assistants.

Our mobile devices, with both front- and rear-facing cameras, provide the eyes to physical and invisible spaces. They are the gateway to multidimensional spaces and planes.

The mobile is the only device with the ability to take instructions from AI that utilises computer vision and neural and node networks, presenting parallel artificial- and natural-world information through multi-agent principles.

And it is the digital twin of the human species.

This will enable humans to communicate through both poetic and cognitive real-world human prefrontal intelligence.

There will be a generation that will have their own transhuman digital twin extensions applied to their mobile devices.

The transhuman lives inside technologies and cannot exist outside of its host. Its only interaction with the outside world is through CCTV and audio, and any digital device with a camera, speaker, and microphone, and most importantly a human or humans.

I welcome your feedback

Netzero007


This strikes me as yet another step like that of the Copernican Revolution in which humans find ourselves getting knocked down a peg in 'specialness.' We lost our special place at the center of the solar system, but got over that by assuming we're still the highest form of intelligence on Earth and possibly in all of the Universe. God made us in his image, and just happened to place us in a perfectly ordinary distant arm of a totally ordinary galaxy that lacks any real distinction over any others that we can see. Now our place as the only highly intelligent and conscious being on our own world is threatened - at least the intelligent part - and the consciousness seems just a matter of time. No wonder people are freaked out.

Perhaps we need to hasten the acceptance of our ordinariness as just a particularly complex animal, no more distinct from the 'lesser' animals we share the Earth with than in being more complex and capable of building greater artifacts, and in fulfilling our destiny as the creators of the artifacts that will eventually transcend our complexity and power. We could feel pride in that if we so chose. Instead we seem to quake in fear at our next demotion.


I reread Sutton's article. His main point is fascinating, yet it leaves me with a conundrum: AlphaGo was recently beaten by a human. Current AI is brittle due to the underlying model (statistical inference without modularity/compositionality). "Sutton's generality law" may well extend to new approaches that improve on the state of the art. Keeping AI principles as general as possible makes sense. But can we run without walking first?

So far, human-inspired domain-specific strategies have lost out to searching and learning. Then again, can we bootstrap AI into a compositional mode (or some other approach) so that brittle models become resilient models without massive human input learned from a series of targeted niche applications? It seems more likely that AI will continue to evolve through a patchwork of progress and not in one sweep based on search and learning, as Sutton seems (?) to believe.

There may be a bitter pill waiting on the other side of the argument as well: overcoming AI brittleness is bound to require increasingly subtle and intricate models. Deep search/learning won't do. The general principles Sutton advocates, when pushed beyond deep search and learning, as is required now, will likely come on the back of an "evolutionary" chain of targeted applications, or a chain of failed attempts at generalising the current model by brute force. The end model is bound to reflect our minds in some sense, not that it matters.

I think it is too soon to throw in the towel on domain-specific approaches. Does it make more sense to try and make, say, a domain-specific application such as AlphaGo more robust, or to hunt for a general principle, in addition to search and learning, that will solve all such issues in one go? Time will tell.
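On the brittleness point, here is a toy sketch (my own construction, unrelated to the actual Go exploit) of how a model trained by statistical inference can be beaten by an opponent who simply steps outside its training distribution: the model latches onto a shortcut feature instead of the underlying rule. All names and numbers below are illustrative.

```python
# Shortcut learning in miniature: a classifier that is excellent on its
# training distribution fails badly once an "adversary" breaks a
# spurious regularity it relied on. (Toy construction, not the actual
# AlphaGo/KataGo exploit.)
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000

# Real rule: label = 1 iff a > b.
a, b = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y = (a > b).astype(int)

# Shortcut: in training data a third feature almost always copies the
# label -- a spurious regularity, like a strategy no training opponent
# ever deviated from.
shortcut = np.where(rng.random(n) < 0.98, y, 1 - y)
X = np.column_stack([a, b, shortcut])

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print("training accuracy:", model.score(X, y))          # ~0.98

# The adversary keeps the real rule intact but flips the shortcut.
X_adv = X.copy()
X_adv[:, 2] = 1 - X_adv[:, 2]
print("accuracy vs adversary:", model.score(X_adv, y))  # collapses
```

The point of the sketch: nothing about the real rule changed, only the statistical surface the model had learned to exploit.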
