How Smart Is AI, Really? (Or: Why xAI May Be Good News)
AI has managed to put the cart before the horse—it is time we find a horse
One of the most challenging unsolved riddles in artificial intelligence is paradoxical: Deep learning, the leading AI paradigm for more than a decade, has proven so surprisingly successful that it has earned the adjective “unreasonably effective.”
It is effective because it works well across many problems; it is unreasonably so because the main ideas are simple, yet we don’t know how or why they work. Deep learning is as powerful as it is baffling—the closest thing we have to magic in the modern world, a Promethean gift from the gods.
“Effective beyond reason” is the same strange compliment that theoretical physicist Eugene Wigner paid in 1960 to the language of the universe: mathematics. We don’t often question why the cosmos—from the vast distant galaxies to the omnipresent infinitesimal quarks—speaks in numbers, equations, and symbols; the unreasonable effectiveness of mathematics as a tool for describing reality with sublime simplicity is a gift we accept, embrace, and admire.
Maybe we should do the same with deep learning’s unexplainable achievements. Maybe they’re an indicative clue that, as OpenAI’s Greg Brockman recently said, there’s “something deeply correct about the underlying ideas” behind neural networks. Maybe deep artificial nets, like mathematics, are windows to the secrets that would otherwise lie just out of our sight. Maybe, like mathematics, deep learning is the universe talking to us in a language we don’t yet speak.
As beautiful and appealing as this hypothesis is, we can’t even begin to test it. Whereas mathematics is the foundation from which we grow the edifice of our knowledge, deep learning is severely disconnected from it. As a largely empirical discipline based on loose biomimicry, brittle heuristics, and trial-and-error procedures, its unreasonable effectiveness couldn’t be further in nature from that of mathematics.
We don’t know what makes neural networks so versatile and powerful, but neither do we know how to assess their extensive and unprecedented capabilities. This reveals a conundrum: deep learning is so effective that it has reached well beyond our scientific means of evaluating where that seemingly limitless effectiveness stops.
This leads to a problematic corollary: because deep learning works so well, yet we lack both the mathematical groundwork and the scientific tools to know how or why, it is all too easy to assume it works better than it does.
The following is the story of why these questions are now more important than ever. An improbable story that oddly connects two apparently distant dots: an epistemological Science essay casting doubt on our ability to know how smart AI systems are, and the ridiculously ambitious ontological quest to unveil the mysteries of the universe—the central goal of Elon Musk's brand-new company, xAI.