AI Just Entered Its Manhattan Project Era
Thoughts on Anthropic vs the Pentagon
As promised, I bring you my take on what’s going on between Anthropic, OpenAI, and the Department of Defense. Plenty of people have written about the specific details of the deals and the current situation, so I’ve chosen a different focus. As a European watching the events unfold from the other side of the pond, I hope to compensate for what I lose in distance with what I gain in perspective. I have divided this post into three acts.
Act 1: What happened so far. A quick timeline of all the relevant events. Go over to act 2 if you already know all this.
Act 2: What I think will happen next in the short term (e.g., what will become of Anthropic as a company), and why the outcome doesn’t matter much.
Act 3: What happens when AI is no longer just a technological matter.
ACT 1: WHAT HAPPENED SO FAR
Here’s a quick timeline of the relevant events.
July 2025. Anthropic signs a $200 million contract with the Pentagon and becomes the first AI lab deployed on the Department of Defense’s classified network. The contract includes two restrictions: Anthropic’s AI cannot be used for mass surveillance of American citizens or for fully autonomous weapons that select and engage targets without human oversight.
January 2026. Defense Secretary Pete Hegseth issues an AI strategy memo directing that all Department of Defense AI contracts incorporate standard “any lawful use” language within 180 days, a direct collision with Anthropic’s restrictions. Negotiations begin.
February 2026.
Monday 16. Axios reports the Pentagon is considering designating Anthropic a “supply chain risk to national security”—a classification previously reserved for foreign adversaries like Huawei—and invoking the Defense Production Act (DPA) if Anthropic doesn’t remove its restrictions. A senior defense official says they’re “going to make sure they pay a price.”
Tuesday 24. Hegseth and Dario meet. CNN says the “tone was cordial and respectful” and that Hegseth “praised Anthropic’s products.” Axios says the meeting was “tense.” Whatever the case, Amodei reiterated his red lines and Hegseth set a deadline: 5:01 PM ET Friday 27 to agree to “all lawful purposes” or the contract gets canceled and Anthropic gets designated a supply chain risk.
Thursday 26. Amodei publishes a statement: “We cannot in good conscience accede to their request.” Emil Michael, the Pentagon’s Undersecretary for Research and Engineering, publicly calls Amodei “a liar” with “a God-complex.” More than 600 Google employees and 90 OpenAI employees sign an open letter titled “We Will Not Be Divided,” backing Anthropic’s red lines. Altman tells OpenAI staff in an internal memo that his company shares Anthropic’s position on autonomous weapons and mass surveillance. He also expresses a similar position publicly on CNBC.
Friday 27.
3:47 PM ET: Trump posts on Truth Social, ordering every federal agency to stop using Anthropic’s technology, calling them an “out-of-control, Radical Left AI company.” Key excerpt: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!”
5:14 PM ET: Hegseth applies the supply chain risk designation, placing special emphasis on the links between Anthropic and effective altruism (calling it “defective altruism”). This is what he said about Anthropic’s position: “Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military.”
9:56 PM ET: Altman announces OpenAI has reached an agreement with the Pentagon for classified network deployment, claiming the deal includes the same two restrictions Anthropic had been blacklisted for insisting on, but embedded through a safety stack whose terms nobody outside the deal can verify, paired with “any lawful purpose” language that, on its face, contradicts them.
(People are reasonably confused about why the DoD would blacklist Anthropic and then accept the same red lines from OpenAI, but the best explanation I’ve found is that they are, indeed, not the same red lines; Altman framed it as such because that’s what he does for a living, in and out of OpenAI: play with words. To his credit, he offered to do an AMA session and answered quite a lot of questions, although I don’t think attendees left with all the information they wanted. Altman admits the deal was rushed and the optics are bad.)
Immediately after, the backlash against OpenAI starts. QuitGPT—a grassroots boycott that had been building for weeks over Greg Brockman’s $25 million donation to MAGA Inc—claims 700,000 sign-ups. (The figure is slightly above 1.5 million at the time of writing; anecdotally, my entire X timeline is people urging others to cancel ChatGPT and switch to Claude.)
10:49 PM ET: The New York Times reports that “Mr. Altman engaged in talks with the Pentagon, starting on Wednesday, over a deal for its technology” and that he “negotiated with the Department of Defense in a different way from Anthropic, agreeing to the use of OpenAI’s technology for all lawful purposes.” Meaning that Altman was already trying to replace Amodei before supporting him publicly.
ACT 2: WHAT HAPPENS NEXT
That’s a broad view of the facts. It’s unclear what happens next. So far, what I see is people doing the customary commentary. A lot of interesting posts about the implications of the events. My take is, having read a lot of them, quite a hot one: I don’t think much will change.
Let me explain why I say this, lest I misrepresent as detachment what is actually perspective-taking on an otherwise big-deal situation. We love when drama breaks—I like reading about it!—but on historical time scales, I think this will be a speck of dust in an ocean of progress. Just like Altman’s firing in 2023 was, or DeepSeek’s glorious 2025 entrance into the general public’s awareness (not so much into the rankings of best AI models, though).
So, this is what I think happens next.
Anthropic and the DoD won’t work together again for the foreseeable future (at least until a Democrat takes the presidency).
Anthropic will successfully challenge the “supply chain risk” designation in court (it’s quite ridiculous, to be honest; it should be a quick process if it even goes that far). They could certainly use Claude’s help.
Neither Hegseth nor Trump will push further because they never actually thought Anthropic was a risk in any meaningful sense (beyond their apparent contempt toward effective altruism and the fact that Anthropic is quite openly pro-Democrat). Hegseth and Trump didn’t like that Amodei 1) repeatedly tried to impose red lines on the DoD and 2) stood his ground when threatened, but that’s it.
I’ve read takes that this was staged or outright a “scam,” in the sense that this is premeditated retaliation against Anthropic for not financially supporting Trump’s administration. I don’t think that’s correct, or they wouldn’t have worked with Anthropic at all in the first place. The fact that the DoD chose Claude models over ChatGPT or Gemini on the strength of a relatively small technical edge suggests that, at the time, non-technological factors mattered comparatively little.
The DoD will keep OpenAI as a partner, which, in terms of access to power, leaves Anthropic in a bad position. You don’t want the most powerful actor in the world, and the only one that, in a way, rules over you, as an enemy. Even if Anthropic successfully overturns the supply chain risk designation, the relationship is damaged for the remainder of the term.
In terms of revenue loss for Anthropic, however, losing the contract is essentially inconsequential. Anthropic has a run-rate revenue of $14 billion; the DoD contract was $200 million. Anthropic is also leading in the enterprise market, and there’s an ongoing user exodus from OpenAI to Anthropic (whether that exodus will also be reflected in talent is unclear; that would be a really big deal for OpenAI).
I don’t think the exodus will offset the revenue loss from the DoD contract though, meaning that, in a ranking of importance, the situation looks like this: Anthropic enterprise revenue >>> DoD contract >>> users switching from ChatGPT to Claude.
Accordingly, QuitGPT will change nothing in terms of overall numbers for OpenAI either: what are two million users for a company nearing one billion? More generally, I don’t think this kind of reactive, all-over-the-place movement can ever reach critical mass and push normies out of their habits. My first impression was even the opposite: this is bad for Anthropic, because a significant fraction of those posting online that they’re changing vendors or signing on to QuitGPT won’t be paying for Anthropic either way (a lot of people hated OpenAI before this and are only being vocal about their hatred for ChatGPT now that the trend has momentum, and plenty of others hate AI altogether, regardless of whether it’s Claude or ChatGPT). If two million users download Claude to attack ChatGPT and use it casually without paying, that’s a toll on Anthropic rather than a blessing. Anthropic was happy to have a high percentage of high-paying customers in an overall much smaller user base.
I will update a bit against Altman (for being even more cunning than I already thought) and in favor of Amodei (for standing his ground, whatever it is). But, in case you are worried about the use of AI in war, you should know that OpenAI and Anthropic are pretty much the same.
ACT 3: WHEN TECHNOLOGY IS NOT A TECHNOLOGICAL MATTER
And yet. Even if nothing changes from this specific situation—even if the courts reverse the designation, the revenue impact is negligible, and everyone moves on in three weeks, as it tends to happen with these things that feel huge in the moment but dissolve in the impossibly greater inertia of the world—these events reveal, indirectly, what I think is a very consequential shift in how we should think about the AI industry and AI as a technology.
The US government just showed everyone what it can do, or, more importantly, what it’s willing to do. That’s AI transforming from mostly a technological matter into mostly a political matter, and not precisely in a democratic way (that’s what Amodei was referring to when he said “disagreeing with the government is the most American thing in the world”). Of course, the US government was never going to let the AI industry push forward without supervision forever. As soon as generative AI proved to be a significant geopolitical factor—and it has pretty clearly reached that point—they’d seize control over it. Once they let Claude into their classified network and realized the level of capability frontier models have achieved, the fate of this technology was sealed.
So even if “Anthropic vs the DoD” will be relegated to a footnote in history books, the higher-level shift from techne to politeia (with a sprinkle of kratos) is permanent: AI is no longer about the art of making models or nerds honing their tuning craft, but about the arena of political and geopolitical power. We’re entering the Manhattan Project era of AI. I don’t know if the US government will end up creating—or, failing that, nationalizing—a big AI project, but they are well aware that the technology has become quite useful and are acting accordingly.
So, what you should update on is not that OpenAI won and Anthropic’s dead—or, depending on who you ask, the exact opposite—but something bigger: that you cannot predict the future of AI as a technology by assuming the influence of external actors like the DoD or geopolitical events like the war on Iran can be dismissed as a negligible term in an otherwise pristine equation that you extrapolate as lines on a chart. The entire AI forecasting apparatus—scaling laws, capability timelines, benchmark extrapolations, and so on—operates as if AI progress is a physics problem with logistical and financial constraints. Welcome to the real world. There always were forces lurking in the shadows, waiting to disrupt those equations. Those forces are now activated.
There are plenty of political and geopolitical vectors of disruption, and they will only become more common as AI gets better and thus more consequential to the future. This won’t necessarily slow progress, but whatever comes next looks nothing like the predictable path so far, where a bunch of private companies blithely compete to see which one makes better toys. Now is when technical people realize that human will was always greater than any scaling law.



