OpenAI Rules the Changes But Meta Changes the Rules
An analysis of Meta’s master plan and OpenAI’s masterpiece
Meta has put the entire AI startup ecosystem against the ropes. They’ve released the two smaller versions of the Llama 3 family (8B and 70B-parameter dense models) and have given us a glimpse at the large version, a 405B dense model that, although still training, is already showing GPT-4-level eval scores. Critically, Llama 3 is open access. A GPT-4-class model available to anyone is, ironically, OpenAI’s primary threat as the leader of an industry that’s barely turning a profit and whose members’ entire business model is selling access to private models via API. It seems there was never a moat, but if there was any, it surely wasn’t theirs.
I focus this analysis on a “Meta vs OpenAI” dichotomy because, let’s be honest, most OpenAI-type startups, the likes of Mistral and Cohere, never stood a chance. It’s OpenAI who’s ruling the changes and deciding the direction. I acknowledge the framing is somewhat forced anyway because Google and Anthropic are worthy rivals. However, they’re each playing their own game, compromised by their circumstances.
Google funded the field of generative AI, which, in a dark twist of destiny, grew to become a dangerous threat to the ads-based search business it relies on. Gemini 1.5 Ultra could be a breakthrough, but it’s undeniable that Google is hesitant to innovate recklessly, as if walking a tightrope. Anthropic is constrained by its identity as AI safety’s outpost. They’ve been careful not to advance capabilities beyond the frontier, which is, by definition, the opposite of what leaders do.
Neither Google nor Anthropic is incentivized to outplay OpenAI by changing the rules. But Meta is. That’s exactly what they’re doing (and were trying to do even before LLaMA leaked “by mistake”). No one cared then because Mark Zuckerberg was embroiled in metaverse failures and continuous backlash against the twisted incentives ingrained in his social media platforms. The Llama 3 family—especially after Zuck’s public determination to make AI Meta’s priority—makes the world pay attention. Meta matters now and will use Llama 3 to redefine how the story unfolds—that’s the sheer power of a GPT-4-class open-access model.
Meta’s open AI is what OpenAI should’ve been
Don’t interpret my praise of Meta’s efforts as an admission of OpenAI’s frailty. They’re threatened, but that’s not the same thing as defeated. Far from it. Let’s recall that GPT-4, which is still above Llama 3 405B in most benchmarks, is one and a half years old. The most recent GPT-4 version is a month old, true, but let’s not compare an iterative update with an entirely new model that leverages the latest research on synthetic data and algorithmic improvements, as well as the lessons from past mistakes.
I don’t think OpenAI is surprised by Llama 3 (or worried, for that matter). They surely knew the window during which their leadership had even a chance of existing was tiny, merely a function of how much leverage they could extract from GPT-3’s technical advances and ChatGPT’s popular interest. They exploited it successfully. However, once tech giants with two orders of magnitude more revenue caught up (and they did), OpenAI’s long-lasting prominence became a matter of mercy. That, setting aside the absence of surprise or concern, is the new annoying reality for OpenAI (even if Sam Altman wants to appear nonchalant, devoting his airtime to trolling the audience).
OpenAI had to prepare for Google’s attempts to surpass them in tandem with the DeepMind AI powerhouse. They did it. They had to recover from the great divide that led to Anthropic’s founding. They did it. Now they have to fight back against Meta’s attempts to leverage the unlimited power of open-source to commoditize the great software technology of our times—large AI models. A new day for OpenAI, a new monster to slay, except this time it’s all the monsters at once.
Zuckerberg implied on the Dwarkesh Patel podcast that he doesn’t think AI models are going to be standalone products but rather advanced software infrastructure powering other things (in Meta’s case, the social aspects of online content and connection, to use a generous description of what Meta does). As Zuck told Patel, “We have a long history of open-sourcing software; we don’t tend to open source our product [emphasis mine].” If Meta is open-sourcing its best AI stuff, it’s because they don’t consider Llama 3 their product but a complement to be commoditized.
I said OpenAI doesn’t seem worried. The reason, rumors have it, starts with GPT and ends with 5. But unless OpenAI enacts a miracle worthy of the promise of the scaling laws, it may not be enough. AI companies have proved that building large models is a solved technological problem (whether they’re broadly useful is a different question), so Google, Meta, and Anthropic won’t sit around waiting for the next timely leapfrogging by OpenAI. There are only two soft moats left: money and talent. OpenAI isn’t leading in the former and may not lead for long in the latter now that the novelty factor is gone and Google and Meta have attractive offerings with Gemini and Llama. If GPT-5 turns out amazing, OpenAI’s apparent indifference to Meta’s moves is warranted, but so far it’s looking worse than ever for them.
Meta is trying to suffocate any potential competition under the weight of their ambitions and, unlike Google, they can do it cleanly—generative AI models are substitutes for search engines, but they can enhance content creation and social media seamlessly. To that we should add that helping the open-source community is a strong ideological position. It doesn’t hurt either that these maneuvers make Zuck look cool in contrast to all the others, especially Sundar Pichai who is, arguably, Zuckerberg’s closest competitor and, not-so-arguably, the uncoolest of the bunch (perhaps undeservedly for Google as a whole—but when is popular opinion fair and rational?).
It can always be the case that Zuckerberg’s bet is misplaced; that AI doesn’t end up being another software cog in a larger system but the entire system itself (that’s closer to OpenAI’s position, with all its grandiloquent discourse about AGI, silicon gods, and post-scarcity utopia). Zuckerberg knows a materialization of this alternative future is possible and would be “trickier” for Meta:
There is one world where maybe … the model ends up being more of the product itself. I think it’s a trickier economic calculation then, whether you open source that. You are commoditizing yourself then a lot. But from what I can see so far, it doesn’t seem like we’re in that zone.
In a way, he never thought we’d be in that zone at all, so it isn’t newsworthy that he believes we aren’t in it now. Is his understanding of the subject limited? Does he have a plan in case commoditizing AI models doesn’t yield the expected value for Meta’s shareholders (killing OpenAI may not be a convincing rationale for those with millions at stake)? Or is he just playing the game under his own rules and predictions while trying to crush anyone who dares to set a different set of beliefs and timelines as the standard? Even if his forecasts are mistaken, he may win anyway by forcing AI startups’ margins down to zero while redirecting AI-generated content to his social media business. If he executes, it’s truly a master plan.
Some Google employee warned of this a while ago in terms so perceptive of what eventually happened that reading it again now feels eerily prescient:
Paradoxically, the one clear winner in all of this is Meta. Because the leaked model was theirs [LLaMA], they have effectively garnered an entire planet’s worth of free labor. Since most open source innovation is happening on top of their architecture, there is nothing stopping them from directly incorporating it into their products. The value of owning the ecosystem cannot be overstated.
The masterpiece starts with GPT and ends with 5
That is, at least, the official Meta-favoring narrative. The reasoning is strong and Meta’s motivations—lest someone believe Zuckerberg is a soul of charity or an open-source ideologue—are clearly defined (even if not to everyone’s liking). But because the mob moves on vibes and OpenAI’s are off, no one stops to consider what Altman’s people have left to defend themselves with. Meta made its move; now it’s time for the others to act and react.
The first point in favor of API sellers (not just OpenAI) is that a super large dense model isn’t really what individual customers want. They’re delighted with simple access to the ChatGPT API and playground. What about those who pay $20/month? Wouldn’t they be willing to spend their money elsewhere, say on a local, private, personalized model? Not that many people pay for ChatGPT, but among those who do, how many could realistically download Llama 3 405B, fit it into the GPU memory of their servers (see the back-of-envelope sketch below), and then spend hundreds of dollars per month on inference? If you can’t run the largest Llama 3 locally, it just doesn’t make sense to switch from your existing API provider.
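To put the “fit it into GPU memory” point in rough numbers, here is a minimal, hypothetical back-of-envelope sketch. It is an illustration, not an official requirement: it counts only the model weights (405 billion parameters at a few common precisions) and ignores the KV cache, activations, and serving overhead, all of which add more.

```python
# Back-of-envelope VRAM estimate for hosting a 405B-parameter dense model.
# Illustrative only: counts weights alone, ignoring KV cache, activations,
# and framework overhead.

PARAMS = 405e9  # Llama 3 405B parameter count

# Common precisions and their storage cost per parameter, in bytes.
BYTES_PER_PARAM = {"fp16/bf16": 2, "int8": 1, "int4": 0.5}

for precision, bytes_per_param in BYTES_PER_PARAM.items():
    gigabytes = PARAMS * bytes_per_param / 1e9
    gpus = gigabytes / 80  # assuming 80 GB of memory per high-end GPU
    print(f"{precision:>9}: ~{gigabytes:,.0f} GB of weights "
          f"(~{gpus:.1f} x 80 GB GPUs)")
```

Even with aggressive 4-bit quantization, that’s roughly 200 GB of weights—multi-GPU territory before you count the KV cache—nowhere near the hardware an individual $20/month subscriber runs at home.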
Some customers who are sold on generative AI and can afford to run Llama 3 locally—both in terms of memory and inference—will surely leave OpenAI (or Google or Anthropic). OpenAI keeps doubling down on enterprise features, but wealthy customers remain dubious of the safety, reliability, privacy, steerability, and applicability of generative AI models built by others. It’s too long a list of reservations for a tech product whose value proposition remains unclear at best, which has yet to prove itself up to the task (and which, for some reason, never comes with a manual). Non-model-building labs like Cognition and big tech latecomers like Apple will be willing to bear the additional costs in exchange for better safety, privacy, and steerability (although Apple may have a different idea in mind). They’ll gladly take Meta’s gift unless OpenAI and the others can offer a qualitatively better private option.
So it all comes down to one question: Can OpenAI offer a private AI model that’s good enough to both keep its customers satisfied without switching and convince them that its preferred vision—that AI is the main course, not a commodity pathetically subordinated to content creation and ad sales—is the one shaping this timeline?
What if GPT-5 is a masterpiece? What if it’s a feat of engineering as good as roon’s snarky comment on Meta’s “wasting all those beautiful H100s” suggests? What if it’s so powerful that OpenAI manages to hold the subsequent advantage for as long as it did from GPT-3 onward? What if once GPT-5 is out OpenAI chooses to make GPT-4 free to use? If they do, who in their right mind would choose to take the Llama 3 path? I bet some will but most people won’t.
We don’t know how any of this will unfold; Meta might be changing the rules but OpenAI keeps ruling the changes. And there’s still a game left to play.