17 Comments

For most practical purposes, from the average layman's perspective, AI might as well be voodoo, as long as it helps them do what they need to do somewhat reliably.

Which brings us back to your point: unlike alchemy, AI works. Hallucinations aside, it does deliver on many fronts.

But it certainly wouldn't hurt for us to get a better grip on the processes inside the black box, that's for sure.

author

Agreed Daniel.


Doctor Boomer Doomer :-) replies...

We don't actually know whether it will hurt us to get a better grip on the processes inside the AI black box. We're assuming, typically without any questioning, typically as a matter of unexamined faith, almost in a religious manner, that more knowledge is always a good thing.

That "more is better" philosophy is the foundation upon which the modern world is built. We keep adding more and more floors to the skyscraper of the modern world, building the tower of knowledge and power higher and higher. We're transfixed and mesmerized by the glorious edifice as it rises into the sky, because it's our own glory we see ascending, and so we are distracted from examining the ancient philosophical foundation upon which it all depends.

One of the things AI can teach us is that we're engaged in an experiment to see how much weight that foundation can hold. So, in a sense, AI is a kind of science, because that experiment is probably going to provide a credible answer.


It's a Doomer. It's a Boomer. It's Phil the man!

My mention of the "black box" is in reference to the fact that while OpenAI and other companies know how to build functioning LLMs, the processes that make these LLMs work the way they do aren't all that well understood. It's a bit of a "throw more compute, more data, and more extensive training at them and see what happens" approach.

As it stands, the generative AI genie is not going back into the bottle. It'll become far more ubiquitous in the coming months and years.

Given that that's the case and that we will continue using this genie to grant more of our wishes, we could at least try to understand what makes the genie do his magic, so we're better equipped to deal with any unexpected (and potentially negative) outcomes he might throw at us.

I'd argue that, regardless of where you stand on the "AI Doomerism" / "AI Utopia" spectrum, you should be happy to advocate for a more fact-based discussion, grounded in a better understanding of the very thing we're arguing about.


Hi Daniel,

Yes, I understood what you meant by the black box. Gen AI works somehow, and we're not sure exactly why. Looking deeper into the black box might be constructive and helpful, or what we find may unlock more powers that we'll have to figure out how to manage. Nobody knows which.

That digging ever deeper into AI, and across the technology spectrum, is based on an assumption that we can successfully manage WHATEVER we discover. A fact-based conversation should include the fact that there is no proof that this assumption is true, and the fact that the evidence is pointing in the opposite direction.

Nuclear weapons = no idea how to make safe

Genetic engineering = no idea how to make safe

AI = no idea how to make safe

You want a better understanding of the very thing we're talking about. That's not AI. That's human beings.

Are human beings gods? If not, then there must be some limit to how much power we can handle. Whatever that limit might be, we're moving towards it at an accelerating pace.

The perspective you're articulating is actually the pessimistic view, because it assumes that there's really nothing we can do, that we have no choice in the matter, that racing forward into the dangerous unknown is going to happen no matter what, whatever the consequences.

And I think you're most likely right.

But, if it pleases the court, I'd like to hold on to the fantasy that we are capable of reason just a little bit longer.


You might find this an interesting read: "Reclaiming AI as a theoretical tool for cognitive science" (https://psyarxiv.com/4cbuv/)

I quote: "One meaning of ‘AI’ that seems often forgotten these days is one that played a crucial role in the birth of cognitive science as an interdiscipline in the 1970s and ’80s. Back then, the term ‘AI’ was also used to refer to the aim of using computational tools to develop theories of natural cognition."

Ergo: AI historically was considered (part of) science and nothing is holding us back from considering it science now.

author

Thank you for the link, Jurgen! Historically, yes. The arguments that dismiss it as alchemy now are about machine learning and deep learning, paradigms where more data and compute are preferred to more theory and specific insights about how things work (which, as LeCun says, isn't bad per se). So in a way, AI has changed *a lot* since the early days, which the paper seems to refer to. AI is radically different now than in the 70s and 80s (the whole symbolic vs. connectionist debate is about those differences). In that sense, the alchemy analogy is warranted. But, as I argue, how AI is done, even today, is still very defensible from the right perspective.


I absolutely agree with you: developing AI now probably feels more like dark magic than cold, hard science. I think your analogy is on point. One perspective I miss in your piece is a critical note; from my point of view, it feels like at least some AI companies are intentionally using science as a front to report on their successes.

author

Yes, that's true. DeepMind, for instance, has been publishing a lot of its research in peer-reviewed journals instead of just arXiv. That's a better procedure, more rigorous. But can those experiments be replicated even if they're published? Do independent researchers have access to the data or the algorithms? Most of the time that's not true. And if we go deeper, is anyone working really hard on trying to understand why neural networks are so effective? How do they work?

Those are the "science" questions we are missing. It's a question of degree, I agree, but nowadays that degree is very low. Which, again, might not be *that* important for the time being. They're more explorers than settlers, I'd say.


Any day Alberto drops a post is a good day!

In this one, he evokes the age-old antinomy between science and alchemy.

Where does AI land on this spectrum?

In his eloquence, Alberto slides AI forwards and backwards along the spectrum through his keen analysis, arriving at a provocative tension/balance point.

Alberto's point is that, from a historical perspective, alchemy and chemistry looked like equal candidates, striving side by side towards a comprehensive explanatory system.

To extrapolate: at this moment in time, AI "science" is still in protean form, in search of an explanatory system. This does not mean that it isn't a highly refined technical discipline, but it has not risen to the level of a science yet.

A very interesting argument.

Have a read and see what you think!

author

Thank you Nick!!


I like your final sentence very much. I am writing a series of articles on Substack on the effect of shaming on innovation. It seems that AI researchers are currently shamed for innovating. The flip side is the hype, which does not do them any favours. I don't see the lack of a rigorous scientific foundation as a major issue. Deep learning steers close enough to known physics applications and numerical analysis (the latter of which, in maths, at times comes with a bad rep due to the lack of a fully rigorous foundation). The proximity to these areas makes the approach palatable. It is a concern that prior approaches have been swept under the carpet; I think that this will come full circle. Not addressing compositionality is a major issue, and the murkiness of the current approach does not help when it comes to incorporating this key aspect. I liked LeCun's take on the next steps and agree with him on the insights he intends to apply from physics. I believe this is the right route going forward.

author

Fully agree, Michel - quite often reality is more nuanced than the extreme positions would make it seem. I have more coming with this same nuanced vibe but on different topics.


It’s not a science ... it’s a rhetoric ... which is an alchemy of sorts. Rhetoric has always been about probabilities, not logical proofs.


Unfortunately, I only just came across this post, but I quickly thought of a paper written in 1965 by philosopher Hubert Dreyfus, then at MIT, who was of the phenomenology/existentialist school. The title was "Alchemy and Artificial Intelligence". Although I was taking an AI course from Marvin Minsky around that time, I don't know what Minsky's reaction to the paper was. He must have had some "interesting" conversations with Dreyfus.

There's an interesting observation in the preface. It is:

"The attempt to analyze intelligent behavior in digital computer language systematically excludes three fundamental human forms of information processing (fringe consciousness, essence/accident discrimination, and ambiguity tolerance). ... Significant developments in artificial intelligence ... must await computers of an entirely different sort, of which the only existing prototype is the little-understood human brain."

I haven't reread the paper yet, but it probably offers some useful insights on this topic. It can be downloaded here: https://www.rand.org/pubs/papers/P3244.html (It's long - 98 pages.) Dreyfus had been examining the work of Allen Newell and Herbert Simon. He also wrote a book published in 1972: What Computers Can't Do. See the Wikipedia article about Dreyfus for more information.


An interesting comparison point is the way that scientists discovered treatments for cancer. In Siddhartha Mukherjee's biography of cancer, The Emperor of All Maladies, one of the things that was most striking is the complete ignorance of the mechanisms of treatment during initial clinical trials. It was essentially: "this drug stops cells from dividing; I wonder if we can just use this on patients until they're cured, even though it looks like they might be dying from the treatment faster than the cancer." It was medical alchemy because there were no better options or outcomes for these patients.

Fast forward and we have a very deep understanding of how these drugs work -- but it took many years to get there. In a way, alchemy can often transform from magic into science over time (and has done that in the past when the incentives are outcomes rather than raw understanding); I think the leaps forward in AI will engender a second wave of research on the mechanisms behind it that will propel us forward once we tap out on the "random progress" front.


Shameless plug for my book (currently available for free) about Large Language Models, which actually uses alchemy as an analogy a few times throughout the text. It was created mostly by an LLM that explains how it sees this whole field at this time, and it should also help people become better at working with LLMs:

https://www.amazon.co.uk/dp/B0CK6RHM1Q?ref=ppx_pop_mob_ap_share
