I opened my brand-new edition of Anna Karenina yesterday. To my dismay, I could only find some ink-scribbled symbols on the page. I turned a few more to see if this was a tasteless joke or a serious mistake. Nothing. More weird symbols imprinted on the paper. I furiously closed the book and wrote the most damning review ever on Amazon: "Anna Karenina, a book like any other."
When I told people the story today, they somehow kept insisting Anna Karenina is a masterpiece. Tolstoy is a genius, the characters are complex, Anna’s arc is tragic, and so on. But I have proof that all of this is nonsense. Because at the end of the day, what is Anna Karenina? Ink on paper. That’s all. A physical artifact of no particular significance, mechanically produced like the rest. A book like any other.
This ridiculous take is what I hear when people say that “AI is just linear algebra” or recite one of the other reductionist analogies. Unfortunately, they’re late to the party. This cheap mic-drop was exposed as a rhetorical fallacy long ago by the late philosopher Daniel Dennett. A deepity, as he called it, is a statement with two readings: one trivially true but uninformative, the other seemingly profound but false or meaningless.
“AI is just linear algebra” fits that description. In one sense it’s trivially true: at the most elemental level, ChatGPT is matrix multiplications. But trying to capture everything ChatGPT is with “a bunch of matrix multiplications” is no more true (or false) than saying that “a human brain is a bunch of neurons firing” or, to take my example above, that “a book is some ink on paper.” It’s true but uninformative.
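(To make the trivially true reading concrete, here’s a toy sketch, in plain numpy, of a single transformer-style block. Everything in it is made up for illustration: the shapes, the names, the random weights. It is not ChatGPT’s code. But it shows what “the most elemental level” looks like: mostly matrix multiplications, with a couple of nonlinearities wedged in between.)

```python
# Toy illustration only: one transformer-style block in plain numpy.
# Shapes, names, and random weights are invented for this example;
# this is not ChatGPT's actual architecture or code.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # a 4-token "sentence", 8-dim embeddings

x = rng.normal(size=(seq_len, d_model))      # token embeddings
W_q, W_k, W_v, W_o = (rng.normal(size=(d_model, d_model)) for _ in range(4))
W_up = rng.normal(size=(d_model, 4 * d_model))
W_down = rng.normal(size=(4 * d_model, d_model))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Self-attention: queries, keys, and values are all matrix multiplications...
q, k, v = x @ W_q, x @ W_k, x @ W_v
attn = softmax(q @ k.T / np.sqrt(d_model)) @ v   # ...plus one nonlinearity (softmax)
x = x + attn @ W_o                               # residual connection

# Feed-forward sublayer: two more matrix multiplications around a ReLU.
x = x + np.maximum(0.0, x @ W_up) @ W_down

print(x.shape)  # (4, 8): "just" linear algebra, at the most elemental level
```

Stack variants of that block dozens of times, make the matrices many orders of magnitude larger, train the whole thing on a sizable chunk of the internet, and you get something in ChatGPT’s family. The elemental operations stay this boring; what emerges from them at scale is the whole question.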
In another sense, it’s profound: If AI is nothing more than linear algebra, then there’s no emergent intelligence, no meaningful complexity, and thus no real reason to study it beyond its mathematical underpinnings. But that’s where the trick happens. That’s false. We don’t know what does or doesn’t emerge when the tiniest elements interact at an unfathomable scale. And please, don’t tell me you can fathom one trillion artificial neurons interacting with one another.
(If you’re willing to challenge this undeniable fact of nature, I encourage you to read this famous paper entitled “More Is Different” that the late theoretical physicist and Nobel laureate Philip W. Anderson published in 1972. Still as fresh and valuable as it was fifty years ago.)
The goal of such statements is to conflate the trivially true interpretation with the profoundly misleading one, making it feel like it’s profoundly true: “Oh, wow, so ChatGPT is just matrix multiplications. What a parrot.”
I suspect this attitude stems partly from a desire to downplay AI publicly and dismiss it as unimportant in private. But it also serves another purpose: by reducing AI to a shallow platitude, it no longer warrants deeper scrutiny. This way, you don’t have to confront the possibility that something lies beneath the surface, something you might not be able to emotionally, intellectually, or even spiritually endure.
Biologist Michael Levin tried to disentangle this confusion (to put it charitably) by pointing to the emergent complexity of the human brain and how you can’t describe consciousness or intelligence by talking about chemical reactions or quantum foam:
Some people who write these AIs will say ‘Well, I make them, I know what they do, it’s just linear algebra.’ First of all, you don’t even know what bubble sort does. Second of all, it’s just linear algebra and you’re just chemistry . . .
But of course you’re not. So why are we fairly comfortable to say that the story of biochemistry is not the story of the human mind but you somehow think that the story of algorithms . . . is the story of these things that you’re now making?
AI is not like human intelligence.
It may not be like any intelligence we know.
It may not even fit the word “intelligence” at all.
But it doesn’t matter.
Because whatever it is, we know what it isn’t: just linear algebra.