14 Comments

Kudos for the quick turnaround on this in-depth piece about GPT-4o. It's impressive how you've captured the essence of this release, highlighting not just the technological advancements, but also the economic and societal implications, with such speed and clarity. Your ability to distill complex developments into an engaging narrative stands out. Once again.


Thanks Pascal 🙏🏻🙏🏻🙏🏻


Hear, hear!!


Your crystal ball prediction from this past Saturday was fantastic, Alberto! Now a couple of dumb questions: 1) Will this mean that everyone can access the GPT store via either GPT-4 or GPT-4o? 2) Will this multimodality change the very nature of the tools we can create on the GPT store? The first question has great bearing on my fledgling new business, which is to tap the power of custom GPTs. I've always worried about prospective clients balking at the $20 a month. Is that now a done deal?


Good question, Paul! I'm not sure; they didn't say much about the store during the demo. But if the answer to 1 is yes, then so is 2. Basically, GPT-4o isn't the same model as GPT-4. They've given them similar names, but Omni is an end-to-end multimodal model, which means it can process all modalities in the same query without problem (at least no more problem than GPT-4 had with just text or just images). It's a much more powerful kind of multimodality than GPT-4V's.
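(For the curious, here's a minimal sketch of what a single mixed text-and-image query looks like through the API, assuming the standard `openai` Python client; the image URL and prompt are just illustrative placeholders, not anything from the demo.)

```python
# Minimal sketch of one GPT-4o query mixing text and an image.
# Assumes the official `openai` Python client and an OPENAI_API_KEY
# set in the environment; the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # One message can carry multiple modalities at once.
            "content": [
                {"type": "text", "text": "What's happening in this picture?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```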


Thanks for this. Fun read going home from work.


Wow. Huge progress. I just hope they do shake hands with Apple and fold it into Siri, which is currently the Achilles heel of Apple's ecosystem.


GPT-4o being available for free is fantastic; it means everyone can use it.


Interesting summary; we'll see if we get yet more hyperbole when Meta launches soon. I saw the first of those three tweets you've shown in a group of 30 people yesterday. The point took so long to build that I'd found something else to do before the first minute was up, so I missed the conclusion.


Amazing and clarifying read as always. Thank you!


"In just a brief 25-minute live event they’ve changed the landscape completely." Calm down now. Voice isn't even available because Open AI didn't test how real people would use it. And as you say, it's not available yet. I just think Open AI needs to walk more, talk less.


Just wait for it to be available. But that particular sentence you quoted is about something else: OpenAI is making free a model that's one level above any other. If that doesn't change the business landscape, I don't know what does.


I've been interacting with 4o for an hour now, it's available on my Android and on my Mac browser (France, no VPN). And Voice is available on my Android app (I'm in love with this voice since Day One). I guess it's the previous Voice version because there's still latency. But Alberto is right: the landscape changed suddenly 4 hours ago, and for the better.


Says TechCrunch: "Voice isn’t a part of the GPT-4o API for all customers at present. OpenAI, citing the risk of misuse, says that it plans to first launch support for GPT-4o’s new audio capabilities to “a small group of trusted partners” in the coming weeks." I guess you are a trusted partner. The rest of us are sidelined while OpenAI tests on you.
