Sam Altman, OpenAI CEO, has a new blog post out.
He occasionally publishes one of these, when he perceives a paradigm shift, to share his personal stance apart from his work at OpenAI. One recent example from 2021 is “Moore’s Law for Everything,” where he described an incoming era of increasing wealth and abundance thanks to AI advances.1
This one is about The Intelligence Age. Altman analogizes what he sees as a new stage for humanity to the Industrial Age and, before it, the Agricultural Age. The Intelligence Age, as he presents it, will allow us to manage the as-yet-unfathomable power of AI assistants, AI teachers, and AI partners. The road there will be “paved with compute, energy, and human will,” he says.
Leaving the hyperbole aside, there’s more than one way to read Altman’s words. I will comment on what I perceive at three different levels:
What he says about the state of AI and technology.
What he says about his business and interested investors.
What he says about the hidden sociopolitical discourse.
Deep learning worked
On a first skimming pass, the blog post reads like your typical endorsement of deep learning: it works well, it will change the world for the better, and it will achieve AI’s ultimate goals in our lifetimes. The following excerpt lays out this message:
This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.
How did we get to the doorstep of the next leap in prosperity?
In three words: deep learning worked.
In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.
From what I’ve seen, that’s the part people have latched onto. Superintelligence in “a few thousand days” (exclamation point included)? That sounds crazy. Well, yeah, but not crazier than previous statements by Altman or OpenAI at large:
Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.
A few thousand days, say four thousand, is already about 11 years, roughly in line with previous predictions, as disparate as they may seem today.2
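If you want to sanity-check the timescale, the conversion is a one-liner. A minimal sketch (the specific day counts are illustrative, not Altman’s):

```python
# Back-of-the-envelope: how many years is "a few thousand days"?
DAYS_PER_YEAR = 365.25  # average calendar year, accounting for leap years

for days in (2000, 3000, 4000, 5000):  # illustrative "few thousand" values
    years = days / DAYS_PER_YEAR
    print(f"{days} days is about {years:.1f} years")
```

At four thousand days the figure lands near 11 years, which is why the phrase maps so neatly onto OpenAI’s earlier “within the next ten years” framing.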
Altman’s point that “deep learning worked” is more interesting to me. Because he’s so obviously right and yet manages to fumble the cleanliness of his assertion by adding, in the next paragraphs, that it works “to a shocking degree of precision” and that they (I take him to mean AI companies) “will solve the remaining problems.”
Given that tricking the latest AI model is an international sport (with the most hilariously wrong results making the rounds across social media), it’s weird that he tries to sell the idea that deep learning is extraordinary precisely for its precision. That’s simply not true.3 I understand the intention behind it (how can a computer even talk, right?), but he will lose credibility points with some valuable readers over this detail. As for solving the remaining problems: who’s Altman without a bit of unjustified overhype?
Anyway, I like this part: “deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.”
That’s objectively true. And whether you like it or not, it’s in large part OpenAI’s deeds that sent us collectively down this road. I can’t decide yet whether that’s a good thing (on the one hand, I use ChatGPT a lot; on the other, generative AI is catastrophically polluting the information ecosystem).4
What I do know, looking far back and deep into history, is that soon we won’t have to decide. Because we seldom inquire about the roads we don’t take.
An investor pitch
That’s what most people see on a first skimming pass: superintelligence and deep learning. Then you take a closer look, read between the lines, and see what most people with a skeptical mindset see: this is an investor pitch in the shape of a philosophical essay.
Sam Altman is not talking to you or me. He’s talking to those who have money, billions of dollars to spare, so he can continue to fund his grand (no acrimony) projects. I mean, they’re as grand as they get: an international network of giant supercomputers, an AI device to replace the iPhone, AGI and superintelligence, working fusion energy plants… He’s up there at Elon Musk’s tier in both ambition and breadth of interests. Make of that comparison what you will.
There are no relevant excerpts here because the entire blog post exudes that juicy mix of techno-optimism and high agency that world-class investors seek. So Altman seems to be seeking investors by making them seek him. That is, of course, if his blog post is actually a pitch disguised as a prophecy.
Well, I’m not sure about this reading.
OpenAI has investors to spare. The Information reports that “OpenAI is closing in on a new financing [round] of between $5 billion and $7 billion,” and they’ve “asked investors to fork over a minimum of $250 million.” In another piece, they report that Anthropic, OpenAI’s main competitor besides Google DeepMind, “is attempting to draft off investor interest in OpenAI’s latest round.” The reason? There are too many interested investors.5
Does Altman need more shareholders, and is that why he wrote this blog post, after he’s reportedly secured the interest of the likes of Microsoft, Nvidia, Apple, the UAE, Thrive Capital, and Tiger Global? Not likely.