11 Comments
Phil's avatar

Good stuff, but IMO you are underestimating what is happening behind closed doors in the US admin (and in China). The race to ASI is not just about national security; at the extremes of risk mitigation it can be considered an existential event. This generation's Manhattan Project is already underway. It is one thing to remove Anthropic from the desks of employees, but another to allow any private company to slow down the race for ASI that is going on in air-gapped 'rooms'...

Ian Erickson's avatar

The AI-2027.com prophecy coming true. Speed up or slow down? Time will tell.

Jenny Ouyang's avatar

Thanks for this Alberto.

The "techne to politeia" shift is what worries me most as a builder. I used to optimize for model performance, cost, and API stability. Now I'm thinking about geopolitical risk and regulatory capture. When GPT-4o got deprecated, it killed some of my apps' quality overnight; sure, but that was a technical decision.

But watching the government threaten Anthropic with supply chain designation for contract terms shows a different kind of fragility. You're right that the revenue numbers don't matter here.

What matters is that anyone building on these platforms just learned the rules can change for reasons that have nothing to do with the technology.

Guy Wilson's avatar

Alberto, to me this is not the Manhattan Project moment. It looks a bit more like the Second Battle of Ypres (1915) and its aftermath, when modern science completely gave itself over to the needs of war without any serious qualms. Even that is questionable, as AI has been so heavily used for targeting for the last three years. AI has been about the political, social, cultural, and economic context since at least November 2022. The technology and investments may capture the media, but they have only been half the story since Altman decided to see what his tools might be good for.

Pawel Jozefiak's avatar

The shift from technical matter to political matter happened faster than most people expected. What's strange is that the "relatively small technical edge" point you mention is doing a lot of work here. If the margins between frontier models are thin and closing, then the whole selection story becomes geopolitical preference, not performance.

That changes how you evaluate any model choice: you're no longer picking the best tool, you're picking a side. I'm not sure that's a good development for developers who just want to ship things.

Hugo's avatar
Mar 2 (edited)

Good points, really interesting breakdown. I am wondering what actually happens as AI models get pulled deeper into national defense and major military decisions.

I just saw reporting on a King’s College London study where GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash were thrown into 21 simulated nuclear crisis games. In 95% of them, at least one model escalated to tactical nukes. None of them ever chose surrender or full diplomatic compromise, even when they were losing. Claude recommended nukes in 64% of games, GPT-5.2 flipped from passive to aggressive under time pressure, and Gemini was the only one that went all the way to full strategic nuclear war.

Max Headroom's avatar

I so appreciate your erudite analysis of this situation, Alberto — you help so many of us step back and look at the bigger implications. Appreciate you!

Klement Gunndu's avatar

Interesting framing around "As promised, I bring you my take on what’s going on between Anthropic, OpenAI, and the Department of Defense". I wonder how this holds up when you scale past a single-agent setup though. The coordination overhead can change the calculus quite a bit.

Pam Tingiris's avatar

💯

James Maconochie's avatar

Great framing, Alberto. The shift from techne to politeia is exactly right, and it explains something you identify but don't quite name: when technology enters the political arena, language becomes the primary battleground.

Notice what's already happened. "Lawful." "Autonomous." "Constrained." "Safety stack." Each word is being actively hollowed out by actors with the sovereign power to redefine it. Altman playing with words isn't incidental: in politeia, semantic precision is a strategic asset, and deliberate ambiguity is a weapon.

Your Manhattan Project analogy holds. But the arms race isn't just in compute or capability; it's in who controls the meaning of the words we use to govern all of it.

TheAntiquatedOne's avatar

I think there is one obvious thing you didn't mention: this was essentially a bailout of OpenAI that was not called a bailout. Really, OpenAI was struggling, people were asking how it was going to raise more money, and it was losing corporate clients to Anthropic. And now? It gets access to AWS, something it had struggled to secure for a long time; the Pentagon replaces Anthropic with it; and so on. Yeah, it's a bailout. Good job, S(c)am Altman.