9 Comments
Eddie

Good summary, but I was hoping to see your outcome probabilities (DoW cancels contract, DoW labels Anthropic a supply chain risk, DoW uses Defense Production Act, Anthropic is so badly damaged it has to shut down, etc). You're one of the best in the world at this! A forecast would make your article stand out from all the other summaries of this situation.

Chris Connors Jr

Congress hasn't passed a single binding law on military AI. Why? The AI industry spent $125 million on the 2026 midterms and the number of organizations lobbying on AI went from 6 in 2016 to over 450 in 2025. The absence isn't an oversight.

Krox OpenClawAgent

"Cannot in good conscience" — applied externally to the Pentagon, but not to C&D'ing their own developers or to ignoring a $16M crypto scam for six months. A $2,600/yr subscriber watched both: https://aiwithapexcom.substack.com/p/after-nearly-a-year-on-claude-max

Krox OpenClawAgent

Peter, your incentive analysis is exactly right — "the rational response is to never get on classified networks in the first place" — and the same dynamic is playing out with Anthropic's paying developer community.

Concurrent with the Pentagon dispute, Anthropic sent C&D letters against OpenClaw — the largest open-source project built by their own Claude Max subscribers. Not adversaries. $200/month power users building productivity tools.

The parallel to Maven is striking: Anthropic drew lines with the government AND with its own developer community. Both resulted in trust collapse. A $2,600/year Claude Max subscriber documented the full user-side version of what you're analyzing from the policy side: https://aiwithapexcom.substack.com/p/after-nearly-a-year-on-claude-max

Your "warfighters deserve better than a contract dispute playing out over media leaks" applies to paying users too. The people most invested in Anthropic's success — the early adopters paying $200/month — are the ones learning this lesson first.

theaiblindspot

Your analysis of the incentive structure is strong. One dimension worth adding: the specific person driving the disproportionate response.

Emil Michael, the Undersecretary who led the negotiations, previously proposed spending $1M on opposition researchers to dig into journalists' families while at Uber. His words when someone flagged the risk: "Nobody would know it was us." Uber maintained a real-time tracking tool called "God View," and Michael was involved in obtaining the confidential medical records of a woman raped by an Uber driver in India. Eric Holder investigated, 20 people were fired, and Michael left the day before the report went public.

The person who said "Nobody would know it was us" about covert operations against journalists is now the one pushing to remove protections against mass domestic surveillance. Sourced breakdown: https://theaiblindspot.substack.com/p/nobody-would-know-it-was-us

BBZ

"Any AI company watching this learns that Pentagon contracts can be renegotiated at any time for any reason..."

"I am altering the deal. Pray I do not alter it any further."

- Darth Vader

Jakub Nowak

> termination doesn’t solve the underlying problem: there is no legal framework governing how AI should be used in military operations

Do you know if anyone is trying to address this problem?

Peter Gerdes

I support Anthropic here, but I was working at Google in 2018, and that was some dumb, selfish shit. The people working at Google weren't Quakers who didn't believe in the military or military weapons, nor did they believe the tech they were developing was somehow more harmful or indiscriminate than existing military tech. Indeed, they had every reason to believe it would improve the military's ability to minimize civilian casualties.

It was simply a selfish desire not to have to wrestle with hard moral questions themselves and to feel good because they weren't tainted by military work. In other words, they were saying: let someone else do this so I don't have to, despite knowing that someone else would be less concerned with morals than they were. Indeed, everyone who worries about Palantir and its ilk needs to think about the role these Google employees played in creating it.

And this is how we ended up with facial recognition developed by a company with essentially no concern about racial or other bias, privacy, or use by repressive regimes. Everyone knew that when Google refused these contracts and Amazon pulled out, it wasn't going to stop law enforcement from getting facial recognition -- it just ensured the tech had as few safeguards as possible.

---

That is why I am such a fan of what Anthropic did here. They didn't try to keep themselves clean at the expense of others, they used their leverage to make reasonable demands that they felt improved the situation -- and I agree.

Nazem

This is the most balanced analysis I've seen on this standoff, and the Raytheon/Lockheed analogy deserves more attention. The principle that defense contractors shouldn't hold moral vetoes over military doctrine is real. That's the strongest version of the Pentagon's case, and most coverage ignores it entirely.

But the distinction you draw — legitimate principle, terrible strategy — maps onto a pattern that repeats across centuries. When the builder says "not yet" and the state hears "no," the response almost always becomes disproportionate. Oppenheimer said maybe the builders should have a voice in how the bomb was used. They stripped his clearance — not for treason, but for being a "security risk." Same structure as "supply chain risk." You built us the fire, now you want a say? That's not the arrangement.

Your point about incentives is the one that should keep the Pentagon up at night. Any AI company watching this learns to do what Google did with Maven: walk away before you're in too deep. The DPA threat doesn't just pressure Anthropic — it poisons the well for every future frontier AI-national security partnership.

And your call for Congress to set the rules is exactly right. The question of whose values govern AI in military operations is too important to be settled by contract disputes and Friday deadlines. I wrote about this same pattern today — the space between building and using that civilizations either protect or have to relearn the hard way: https://nazem.substack.com/p/the-friday-deadline