Perhaps AI Is Modern Alchemy. And That’s Not a Bad Thing
Let’s be humble about what we don’t know, but also about what we could know
One of the criticisms that might hurt AI the most is calling it unscientific.
This is tricky because, although it doesn’t necessarily deny the value of what ChatGPT or Midjourney can do, it labels them in a derogatory way, implicitly placing them at the lower end of the hierarchy of things that matter to humanity. Practical, yes, but what do they tell us about the world or ourselves? Nothing.
I’ve argued before that AI is better depicted as an aspiring science, which isn’t the same as unscientific; it emphasizes the goal rather than the current state. And it’s nowhere near as damning a label either. Calling AI a hard science like physics or biology might be far-fetched (not even AI researchers would go that far), yet every discipline we now regard with almost religious reverence began as a protoscience, its aspirations pursued with whatever dubious methods were at our disposal back then.
“Aspiring science” is a compliment. Saying that AI is not really a scientific endeavor is an attack on its credibility: it implies it doesn’t want to be one. If that were the case, AI would be no different from alchemy. That would be a big problem.
Is AI the alchemy of our times?
Alchemy is a charged word, an inheritance left by those who strove, not very honestly, to separate it definitively from chemistry. We intuitively assume that comparing any field of study to alchemy is equivalent to marking it as unserious, a discipline destined for the bag of pseudoscience, together with astrology, humorism, and the aether theories. But I don’t think the comparison between AI and alchemy should be interpreted in such a superficial and hurtful way; if I’m generous, I can read it as an attempt to call out the dubious methods researchers use rather than a rejection of AI’s real possibilities of becoming a science.
In that sense, I can accept the analogy: It’s undeniable that modern deep learning was conceived without a robust theoretical basis, that milestones are achieved by trial and error, that the preferred way to move forward is throwing data and compute at the algorithms, and that working heuristics change every month. The lack of rigor, the almost complete absence of peer review, and the all-too-common untargeted experimentation to see what sticks (often without any intention to falsify a predefined hypothesis) are quite different, for instance, from how experimental physics works. This genuinely resembles ancient alchemy.
On the other hand (and this hand weighs a lot), AI works. It has proven broadly useful across domains, even though we don’t yet have a good intuition for where that utility is best applied. The non-existent theory behind the experimentation is not a sign that AI’s achievements are bogus (although we should always be careful in our interpretations) but quite the contrary: It’s impressive, and perhaps surprising, that we got so far without knowing why or how.
In this sense, AI is not like alchemy, which was both proto-scientific and rather useless as a practical discipline and has remained so to this day.
Alchemy or engineering science?
Thomas Krendl Gilbert, a machine ethicist, doesn’t think we should discard the analogy so fast; the parallels between AI and alchemy don’t end with their dubious methodologies. In an interview with VentureBeat’s Sharon Goldman he said that the people building AI right now “think that what they’re doing is magical,” and that they believe intelligence emerges from training arbitrarily built neural network models on ever-bigger computers, which will eventually give way to a superintelligence beyond ours.
Gilbert criticizes AI builders for not caring about the mechanisms underlying the systems they create, and he dismisses their approach as mystic exploration without understanding, incapable of yielding any valuable insight.
Not everyone agrees with this interpretation, though. Meta’s Yann LeCun recently pushed back on the idea that AI has anything to do with alchemy (apparently not in response to Goldman’s article):
“[F]unny how some folks who think theory has some magical properties readily dismiss bona fide engineering and empirical science as alchemy. Blind trust in theoretical results that turned out to be irrelevant is a major reason why neural nets were dismissed between 1995 and 2010.”
Although it might seem that LeCun is on the opposite side of Gilbert in this discussion, he’s actually right in the middle. His stance can be summarized as “Let’s dismiss neither theory nor ‘engineering and empirical science.’ Both can be useful, both can be insightful.” I agree.
LeCun linked to a talk he gave four years ago called “The Epistemology of Deep Learning” where he argued, quite eloquently, that modern AI, including deep learning, is more akin to an engineering science (in contrast to a natural science, like physics or chemistry) than to alchemy. He made a direct reference to a NIPS 2017 talk by Google’s Ali Rahimi called “Machine learning has become alchemy,” and said, as an introduction to his talk, that Rahimi confused the two.
Engineering science, he explained, is simply the science of inventing new artifacts using methods that include “intuition”, “tinkering”, and even “happenstance”, i.e., pretty much anything that helps you move forward. He said that, as in AI, invention is a “creative act” more than a standardized analytical process. And that’s neither bad nor similar to alchemy, which never directly produced anything valuable.
He pointed out that theory, whose absence in modern AI seems to be the focus of the attacks, usually happens after the artifact has been invented, in a quest to understand, driven by curiosity. We first achieve the “what” and then look for the “how” and the “why”. Other times, experimentation and tinkering are entwined with our understanding of the forces and dynamics that make the artifacts work, creating a two-way channel of insights and valuable wisdom that requires both sources to function.
LeCun has a very strong point here: It’s one thing to try different things and get nowhere (alchemists never managed to turn lead into gold and never invented or discovered the philosopher’s stone); it’s quite another to try different things and get a bunch of results so surprising that the world seems to be facing a revolution. Now, we could debate for days about the scientific value of the results and insights we’ve gathered from building AI models, but the engineering value is simply undeniable: From machine translation, to object detection and recognition, to speech transcription, to game-playing AIs like AlphaZero and academically useful systems like AlphaFold, to the modern generative algorithms that blossomed into ChatGPT and GPT-4.
What silicon-based tools can do now is nothing like alchemists’ attempts at transmuting metals. They got nowhere; AI is getting everywhere.
Perhaps alchemy wasn’t so bad after all
After LeCun tweeted out his comments, Goldman published a second article with Gilbert’s response, in which Gilbert agreed with LeCun but noted that he is missing a crucial factor in the analysis: LeCun belongs to an older generation that cares about science, whereas the current one doesn’t.
“[M]uch of the intellectual energy and funding today comes from people who … sincerely believe they are inaugurating a new era of consciousness facilitated by ‘superintelligent’ machines. That younger generation—many of whom work at LLM-focused companies like OpenAI or Anthropic, and a growing number of other startups—is far less motivated by theory and is not hung up on publicly defending its work as scientific…
Simply put, the claimants to science are no longer in control of how LLMs are designed, deployed, or talked about. The alchemists are now in charge.”
In short, AI is like alchemy not just because it’s done similarly but because, first, those who do it care just as little as alchemists did about the underlying principles that make it work, and second because they put their hopes in finding a philosopher’s stone that can—and will—redeem them by magically transforming silicon into intelligent, living beings.
Not only are they not proceeding scientifically but they don’t intend to.
Even if Gilbert is right and younger generations of AI researchers don’t really care that much about science, or even try to ground their engineering in theory after the fact, that may not be such a strong argument against AI. Even if alchemy has a pseudoscientific quality to it, we shouldn’t forget that it was a precursor to chemistry.
Contrary to popular belief, alchemy and chemistry weren’t always independent of each other. They approach the science of materials and physical transformations from radically distinct perspectives but share part of their history: early chemistry branched out of alchemy to solidify itself as a science. In this sense, comparing AI with alchemy needn’t be an insult or a dismissal but arguably the exact opposite: AI is, in this view, the precursor of a new science yet to be unveiled and constructed.
Current AI researchers and AI scientists are possibly the fathers and mothers of something new that will finally earn a spot among the commonly respected disciplines (if this prediction makes you feel bitter because you truly despise how AI is done today, zoom out a little: over longer timescales, it’s probably true).
Even if current generations don’t give a damn about science in the strict sense, it might not matter. Alchemists didn’t care about science because science wasn’t even a thing back then, yet chemistry sprang out of it anyway.
It’s not that modern AI researchers are consciously and purposefully rejecting any scientific approach to AI (that would be a real problem); they’re simply going with the flow of their heuristics and experimentation. They won’t stop anyone from trying to work on the theory side of it (the sheer size and cost of state-of-the-art systems are a real barrier, though, which is why governments should devote resources to making modern AI a collective, national or international, effort, just as they do with physics or space exploration).
It is in this light that we should read Ilya Sutskever’s comments about AI as alchemy and his acceptance of the analogy. He’s not dismissive of science but hopeful about experimentation:
“… [W]e did not build the thing, what we build is a process which builds the thing. And that’s a very important distinction. We built the refinery, the alchemy, which takes the data and extracts its secrets into the neural network, the Philosopher’s Stones, maybe the alchemy process. But then the result is so mysterious, and you can study it for years.”
All pursuits are, a priori, worthy of our interest
But hear me out here, because I’m going even further. Even if alchemy had never transformed into chemistry by being steeped in rigor and objective measures, it would still have been worth it!
Think about Isaac Newton, who, with perhaps the exception of Albert Einstein, is widely considered the greatest physicist, and scientist overall, ever to live. He was also deeply passionate about alchemy. Newton, the inventor of calculus and discoverer of the laws of gravitation, was devoted to the now-harshly-dismissed-as-pseudoscientific-trash field that is alchemy.
The person who has historically contributed the most to modern science is, at the same time, likely the most famous worshiper of the best-known anti-scientific discipline.
But why? Can we even make sense of such an incoherent choice of interests?
Well, the answer is actually rather simple. We don’t even need to ask GPT-Newton: From his pre-scientific point of view, it wasn’t at all clear that physics, and not alchemy, was the safe bet. He just moved forward with whatever means were available, pursuing whatever interested him. He tried a bunch of things, driven by his endless curiosity and sublime intellect. Some of his endeavors turned out to be deeply relevant to our understanding of the universe. Others, which he seems to have held in even higher esteem, are now seen as a stain on his otherwise impeccable record.
But this classification, which appears to us unchallengeable, almost trivial, is only meaningful with the benefit of hindsight. We now know that physics and mathematics are useful and insightful and alchemy isn’t. For Newton, both were worth pursuing. He was playing a game he couldn’t understand as well as we do today, and thanks to his erroneous moves, we can now dismiss alchemy as unscientific.
He was humble enough (a notable thing to say about Newton) not to let his ignorance rule out something he couldn’t know was pointless.
His lesson, that nothing is unworthy of pursuit a priori, provides an interesting contrast with some people’s view of AI researchers. They are often called arrogant for using metaphors that anthropomorphize AI systems as intelligent or capable of reasoning and understanding, or for holding beliefs that appear crazy today, like the future existence of a superintelligence several times smarter than humanity. And that’s a defensible criticism: we should be humble about what we don’t know.
They are, however, the humblest of all when it comes to not ruling out avenues of exploration and investigation—more or less scientific—that could eventually yield foundational insights for our future.
For most practical purposes, from the average layman’s perspective, AI might as well be voodoo, as long as it helps them do what they need to do somewhat reliably.
Which comes back to your point: Unlike alchemy, AI works. Hallucinations aside, it does deliver on many fronts.
But it certainly wouldn't hurt for us to get a better grip on the processes inside the black box, that's for sure.
You might find this an interesting read: “Reclaiming AI as a theoretical tool for cognitive science” (https://psyarxiv.com/4cbuv/)
I quote: “One meaning of ‘AI’ that seems often forgotten these days is one that played a crucial role in the birth of cognitive science as an interdiscipline in the 1970s and ’80s. Back then, the term ‘AI’ was also used to refer to the aim of using computational tools to develop theories of natural cognition.”
Ergo: AI historically was considered (part of) science and nothing is holding us back from considering it science now.