41 Comments
Philippe Delanghe:

Very refreshing and true. Where are the quiet ones? I like François Chollet, who does not seem to be on the hype wagon. I'm disappointed that Sam Altman is not talking to me, however :-)

Alberto Romero:

Chollet is one of my go-to sources for alternative takes on the goings-on in AI. He tends to throw common sense on top of the ubiquitous hype and anti-hype. Good choice.

David Watts:

A great reminder that AI isn’t just about hype or fear; it’s about understanding, adapting, and staying curious. The point about AI tools being cheap to try but having huge potential upside really hit home. Also, the idea that AI is already here, woven into our daily lives, is something we often overlook.

YH:

Hi. Could you explain more about what is meant by generative AI being in its final stage? Thanks

Dido Miranda:

Thanks for your response. :)

I learned about you ... only today.

Am glad I did. :)

Kaya Aykut:

great list!

re. 14: Doesn't this assume the lack of some big adoption inflection (whose ROI on the big players' massive capex is enough to keep prices low)? Or do you mean there will be an intermediate expensive period prior to the adoption inflection, and then cheap again? Or is it just going to be expensive because... why?

re. 24: Legacy is better because it makes one a real long-termist (vs. money, which ultimately makes one a short-termist as death approaches, or worse, a hedonist asshole). Then again, I don't see how someone who seems to think thrice and speak once, and does so so strategically (a longer way of saying sneaky :-) ), can be someone who cares about legacy more than anything else. Caring about legacy requires truthfulness about one's intentions as a long-term strategy. Some examples: Ray Dalio, John Mearsheimer, Jeffrey Sachs...

re. 26: I hope a whole article on this is coming.

Alberto Romero:

re 14: https://www.thealgorithmicbridge.com/p/the-ai-rich-and-the-ai-poor (the entire reasoning behind that sentence is explained in this post)

re 24: What do you mean Altman isn't clear about his intentions? He's shady and sneaky about his procedures and the methods he uses to achieve his goals, but his goals are quite clear.

re 26: In the works!!

Kaya Aykut:

Do you mean developing AGI (*) is Altman's goal? I don't know of any other goal he has shared publicly (excuse my ignorance). If we are talking about our deductions about his goals, then that is exactly the problem: we only get to guess, never know, what one of the potentially most impactful people on humanity in history wants or aims for.

Remembering that one of the reasons for last November's drama at OpenAI was his having told different things to different board members, I don't think that's a one-off: in my experience, one is either comfortable with misrepresenting the "Truth" (and at these levels the most common way is to have different versions of the "Truth" for different audiences) or isn't.

(*) I'm not even touching the lack of a definition of AGI, which gives him all the leeway he needs to come up with new versions of the truth about his goals.

Alberto Romero:

Sorry, I forgot about your question! AGI is what he says in public, but reading between the lines, you know he wants two things for which AGI is a means: (1) a post-scarcity world, and (2) being the enabler of such a world. He's after his legacy first and foremost. I don't think he has bad intentions, but the means he's willing to use to get there would make most people uncomfortable, to say the least.

Andrew Sniderman 🕷️:

Good stuff!

Without the gen bit, AI gains continue largely unnoticed and linear. Compute gets better. With gen AI came the hype and the $$, and now the gains feel exponential.

I'd really like to read more about the amazing stuff non-gen AI is up to. #18.

imthinkingthethoughts:

These points are full of both experience and wisdom. Thank you

Alberto Romero:

Thank you Riley!

Daniel Nest:

Awesome list.

"Some people saw coming a decade ago what’s happening today; follow them and you’ll see (part of) the future" - I'd love to hear your take / recommendations of a few concrete people here. I have vague ideas but would love your judgement on this. I'd want to keep my signal-to-noise ratio high, so knowing a few specific people would be super helpful.

Alberto Romero:

Another reader asked me privately for this exact thing, so I will copy here the answer I gave him (I could compile a larger list, but these are some of the most famous names; you surely know them all):

The heads of the main labs (OpenAI, DeepMind, Anthropic); also Ilya Sutskever, Andrej Karpathy, the godfathers (Geoff Hinton, Yann LeCun, Yoshua Bengio), plus Jürgen Schmidhuber, Fei-Fei Li, Andrew Ng, and even Eliezer Yudkowsky and Nick Bostrom. Surely many more.

Daniel Nest:

Thanks Alberto. You're right, they're all familiar names to me (except Jurgen Schmidhuber, actually - will have to research more).

And that's quite an eclectic list in terms of the breadth of opinions/visions of the future and p(doom). Some of them think scaling the current paradigms is all you need, while some of them think the LLM architecture is an offramp. And so on.

Then again, you never said you were talking about people who share the same forecasts about the future, so that checks out.

imthinkingthethoughts:

Second this!

Haxion:

Can you elaborate on 26? Almost all the major innovations in AI were invented in the US first, and while Chinese companies are doing very impressive work, on balance I'd say the US is still ahead. Are you referring to the need to construct tons of additional power generation or something like that? I do agree the US political system is very sclerotic and dysfunctional, and that has real consequences for infrastructure.

Alberto Romero:

Exactly, I am referring to the underlying infrastructure. As you say, if China is behind in anything right now, it is in innovation. The US is still ahead. But China produces more, manufactures more, builds more, etc. Like a lot more now.

Haxion:

I guess I'd respectfully disagree here. The major innovation that you highlight as moving beyond generative AI, computational reasoning, is incredibly expensive in power. Looking at the results from DeepMind and OpenAI's o1 (I'm sure Anthropic isn't far behind and will have something similar out soon), it's pretty easy to infer that the LLMs are using chain of thought or whatever as a kind of tree search or CSP solver. And for formal reasoning, all those algorithms scale exponentially in problem size; see, for example, the o1 plot that shows a linear increase in accuracy for an exponential increase in runtime (on whatever benchmark; it's not clear). No doubt the systems will get more efficient with more training and parameter tuning and so forth, but barring some truly insane breakthrough, something close to proving P=NP, exponential scaling is going to remain. And in that limit walls get hit pretty quickly, so a 10x increase in available power doesn't necessarily translate to much larger problems that can be solved. Granted, you could solve more small problems at a time, of course, but I'm not sure I would call that winning the AI race. To my mind, real victories will come from algorithmic efficiency breakthroughs, not infrastructure.
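To put rough numbers on that intuition, here's a toy sketch in Python. It assumes a plain brute-force tree search with a made-up branching factor and node budget (not whatever o1 actually runs, which isn't public), just to show how little extra depth a big jump in compute buys when the cost is exponential:

```python
import math

# Toy model: a brute-force tree search explores roughly b**d nodes to solve
# a problem of depth d with branching factor b. Both numbers are hypothetical.
def max_depth(node_budget: float, branching_factor: float) -> float:
    """Deepest problem solvable within a given node budget: d = log_b(budget)."""
    return math.log(node_budget, branching_factor)

b = 4.0          # assumed branching factor per reasoning step
baseline = 1e9   # assumed node budget affordable today

for multiplier in (1, 10, 100, 1000):
    d = max_depth(baseline * multiplier, b)
    print(f"{multiplier:>5}x compute -> max depth ~{d:.1f} steps")

# Prints roughly: 1x -> 14.9, 10x -> 16.6, 100x -> 18.3, 1000x -> 19.9.
# Under these assumptions, a 1000x increase in compute/power buys only ~5
# extra search steps, which is the sense in which exponential scaling blunts
# a pure infrastructure advantage.
```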

Alberto Romero:

Oh, but you're focusing too narrowly on algorithmic breakthroughs, and I think to win the AI race (admittedly a vague thing to say) you need much more than that: huge data centers, top-notch GPUs and chip fabs, the best engineers, etc. The US wins over China in most of those *for now*. That's my point: everything points to China eventually surpassing the US on this front too. (I will also admit that this is possibly the most controversial of my statements in this post. Good catch!)

Kaya Aykut:

*Supposedly*, market capitalism's biggest advantage over more centrally controlled systems is the optimal resource allocation game. Assuming the economics & ROI of AI work out for investors (which is not a trivial assumption given capex now, scaled ROI way later), do you think China would still come out on top in the long term? #14 also suggests you have some additional thoughts about adoption & ROI curves.

Alberto Romero:

Yeah, but it's funny that the US, supposedly more capitalistic and pro-free-market than China, is more focused on culture wars while China dominates the world with sheer commercial prowess (no wars, no bullets). So, given the numbers we've been seeing recently about China, there might be some flaw in your reasoning (whose premise I accept in principle!)

Kaya Aykut:

Agreed, on the same page. Ultimately it comes down to the tradeoff between how much culture wars detract from efficient allocation (= noise in the allocation problem) vs. the efficiency of hybrid (*) allocation systems more or less without said noise.

(*) It's hard to suggest that China allocates purely centrally...

Haxion:

Fair enough! I focus on power and algorithms because new data centers in the US come from private investment and are being built at pretty crazy rates. And coming back to power, physics gives us a hard constraint: most of the gains in performance per watt come from making transistors smaller, and we are running into limits from quantum mechanics there. We can keep making chips denser through more vertical layers or more die area (each GPU generation is physically larger than the last, after all), but that doesn't reduce power consumption. From conversations with industry veterans, I personally guess another factor of 2-4 is within reach in the remainder of the decade, but that's really it, barring something like moving away from silicon entirely. Gains are possible from more specialized circuitry, though even there it's not clear how much further we can go; an H100 already has the self-attention mechanism hard-coded, I think!

sean pan:

Alberto, would you have 30 minutes for a call with us on a podcast? Let me know if you have a good email.

Alberto Romero:

Hey Sean, you can contact me here: alber dot romgar at gmail dot com

sean pan:

Can you confirm if you got an email?

sean pan:

I sent an email, let me know if you didn't get anything.

Alberto Romero:

Sorry Sean, I didn't get it, no.

sean pan:

I sent it again! Thanks

Leslie Oosten:

Very good.

Angus Mclellan:

Was this written wholly by you?

Alberto Romero:

Yes. Pretty clearly, I must say, haha. If you think any of this could be AI, I suggest you revisit your priors!

Angus Mclellan:

Has AI passed the Turing test?

Angus Mclellan:

I am kind of new to AI. I did study computer-aided engineering, but that was a long time ago.

Miguel Gómez:

Could you expand on the 8th point?

Alberto Romero:

Sure. What do you want to know exactly?

Sangeet Paul Choudary:

#3 is one of the most common fallacies about AI. It is simply not true.

There are 8 fallacies embedded in that argument, explained in full detail here: https://platforms.substack.com/p/the-many-fallacies-of-ai-wont-take

Idol Thoughts:

Awesome 😎👍

Great post!

[Comment deleted, Oct 6]

Alberto Romero:

Really? I've never seen it written that way before. Just the part before the semicolon, which is *why* I added the part after the semicolon.
