Physics Professors Are Using AI Models as Physics Tutors
Once you realize this headline isn’t satire, you’re closer to understanding the future
Podcaster Dwarkesh Patel had an interesting guest recently (when doesn’t he?). Adam Brown is a theoretical physicist at Stanford and a researcher at Google DeepMind (Blueshift team). You won’t find many people better positioned to talk about the relationship between physics and AI.
Halfway through their three-hour conversation, Brown said something that caught me off guard. In hindsight, it isn’t all that surprising (as long as you’re aware of the latest advances), but it may still shock you coming from a university physics professor:
A lot of physics professors are using [large language models] just as personal tutors.1
Let’s break down this sentence, which is as profound as it is illustrative.
First, Brown, who probably knows plenty of physics professors, is not talking just about himself. He’s referring to a broader group we can describe as “people who know more about physics than anyone else in the world.”
Second, he’s referring to AI models as tutors. Of physics. For professors.
Why do laypeople feel so confident declaring that large language models are dumb, know nothing, make foolish mistakes, or engage in faulty reasoning (extrapolating from anecdotes to define their entire understanding of this technology), while knowledgeable physicists are using them to learn physics?
Brown’s words should make you reconsider your idea of what AI is capable of. I shared them on my socials with this caption: “How you respond to this statement says a lot about your biases toward AI.” I respond positively. How do you respond?
He later used chess as an analogy, saying something we have known to be true for decades and accept without reservation: human chess grandmasters (the best human players in the world) are better than they were before, not only because they train with computers but because they’ve learned things about chess, from computers, that no human knew before.
Computers were already tutors of human experts long before generative AI. Why do people find it weird that physicists are getting to the point where chess grandmasters were 10 years ago? Why do they freak out when they realize AI keeps conquering ground? (Some people don’t freak out at all; instead, they bury their heads in the sand, deluding themselves into thinking this is just a fad.)
Perhaps because we have a hard time withstanding such humiliating defeat in our territory. As I wrote back in July:
In the span of little more than 20 years, computers went from “brute-force methods may solve chess” to “[it’s] impossible for humans to compete,” to “impossible for humans to help.” To help. . . .
It gets weirder the more we enter human territory: cognition and creativity. You don’t invite an F1 car to the 100m Olympic race. It feels intuitively fair. We grudgingly accepted our inferiority. But once we shrank into our minds in shame, to hide from powers way beyond ours in the physical realm, we never expected to be besieged here as well. We have nowhere to go.
It is interesting, and perhaps a thought for some other time, that leading physicists like Adam Brown and leading mathematicians like Terence Tao are open-minded about AI surpassing them and about learning from it, like pupils, in their own fields of expertise, yet laypeople, who wouldn’t know how to read an equation either way, hide in fear or shake their heads in denial.
When I reviewed OpenAI’s second o-series model, o3, I mentioned that AI will beat math, coding, and science like it beat chess, Go, poker, and other games before. Physics is more challenging than chess, yet it is relatively straightforward to design reward signals that let an AI model learn it through reinforcement. Both physics and math are formalizable domains, where answers can be checked mechanically, which makes them ideal targets for optimization.
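To make the “formalizable domain” point concrete, here is a toy sketch of a verifiable reward: the kind of mechanical check that gives a reinforcement learner an exact training signal with no human grader in the loop. Everything in it (the function name, the SymPy-based equivalence check, the kinetic-energy example) is my own illustration of the general idea, not how o3 or any lab actually trains its models.

```python
# Toy "verifiable reward": in a formalizable domain, a model's answer can
# be checked mechanically, so the training signal is exact. This is an
# illustrative sketch of the general idea, not any lab's actual setup.
import sympy

def verifiable_reward(model_answer: str, reference: str) -> float:
    """Return 1.0 if the two expressions are symbolically equivalent, else 0.0."""
    try:
        gap = sympy.simplify(sympy.sympify(model_answer) - sympy.sympify(reference))
        return 1.0 if gap == 0 else 0.0
    except (sympy.SympifyError, TypeError):
        return 0.0  # unparsable output earns no reward

# Two spellings of kinetic energy count as correct; a wrong formula doesn't.
print(verifiable_reward("m*v**2/2", "(1/2)*m*v**2"))  # 1.0
print(verifiable_reward("m*v", "(1/2)*m*v**2"))       # 0.0
```

Chess offers the same property in purer form: win or lose is an unambiguous signal, which is precisely why engines could optimize their way past grandmasters.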
AI will soon be superhuman there, too. That’s the kind of breakthrough o3 hints at; the kind Brown (and the other Brown) is foreshadowing.
What does it say about human intelligence, particularly that of people who dismiss AI models, that physicists are revisiting old physics and might soon be uncovering new physics with the help of machines? It says: GPUs go brrr.
REMINDER: The Christmas Special offer—20% off for life—runs from Dec 1st to Jan 1st. Lock in your annual subscription now for just $40/year (or the price of a cup of coffee a month). Starting Jan 1st, The Algorithmic Bridge will move to $10/month or $100/year (existing paid subs retain their current rates). If you’ve been thinking about upgrading, now’s the time.
1. He says they use AI to refresh or learn areas of physics outside their fields of expertise, but in my opinion that doesn’t diminish the significance of the fact.