5 Comments
Apr 25 · Liked by Alberto Romero

Phew! It took me several separate reading sessions, but I'm finally done.

Thanks, Alberto, for this colossal piece. (I can only imagine how much effort and time this must've taken.)

I certainly have a newfound level of understanding of the capabilities and limitations of the current TPA / LLM paradigm as well as the overall landscape of models and challenges involved.

It really feels like the next paradigm-shifting leap will be "true" independent agents, while, if your assumptions are correct, GPT-5 will "just" be a much more capable model within the current constraints. (Which doesn't make it any less exciting.)

I'll probably need a bit more time to re-read and process many of the sections and concepts, but I just wanted to let you know that I appreciate the work you put into making this available for us!

Great article. Thorough and interesting. I learned a lot, and my "go read this" list just got even bigger! Thanks.

Apr 25 · Liked by Alberto Romero

Very helpful. Love the long-form deep dive. Really helps me to think about all of this by pulling it together with plenty of context. I'd also appreciate hot takes predicting impact throughout society if this class of models proves to be along the lines of what you've outlined here.

Great piece. I really liked the Agents section! It provided a great primer on AI agents and their current absence across AI models.

"OpenAI is trying to make GPT-5 more reliable and safe with heavy guardrailing (RLHF), testing, and red-teaming."

This might be the case, but they are definitely not as transparent about it as Anthropic. https://www.anthropic.com/research#alignment I won't be so bold as to claim that absence of evidence means it is not being done, but this recent development brings no confidence to the claim above. https://www.perplexity.ai/page/OpenAI-superalignment-team-n34Qq4AMSOamwHbdw1kMxg

Do you think OpenAI's march to AGI will be stopped by extremely concerning findings like this? https://arxiv.org/abs/2401.05566

"If that happens, Gemini 1.5, Claude 3, and Llama 3 will fall back into discoursive obscurity and OpenAI will breathe easy once again."

I am okay using less capable models if they are built at AI labs with a more transparent testing and safety framework that does the alignment and societal impact work I find lacking at OpenAI. Also, as a knowledge worker, I am less interested in an AGI model built to replace me and prefer models meant to enhance my skills and life (referring to Perplexity AI and Inflection's Pi).

Great work, I hope the writing was as enjoyable for you as the reading was for me!

OpenAI employee Roon once said in an interview that he believes the labs are internally over a year ahead of what's being released publicly.

I'm inclined to believe this, but it seems implausible to keep that up given the race dynamic the labs are in at the moment.

One idea is that perhaps they train larger models in advance, but these are too pricey to release given their inference costs, so they are best kept internal, perhaps to help their employees code or to generate synthetic data.

Then again, this doesn't quite make sense to me on the training-compute side of things.

Wondering if you have any thoughts on this!
