Anthropic v Department of War: Who Controls AI?

February 28, 2026 · Episode Links & Takeaways

MAIN EPISODE

Who Controls AI?

The skirmish between Anthropic and the Pentagon turned into an all-out war on Friday night. Not only did the Trump administration decide to stop working with Anthropic — they're attacking them in ways that go far beyond declining to do business. The question at the heart of this is extremely easy to ask and much more difficult to answer: who controls AI? This is the moment AI ethics stopped being theoretical and became geopolitical.

WHOEVER CONTROLS THE WEIGHTS, CONTROLS THE UNIVERSE

Thursday: Dario Stands Firm
Anthropic is not a pacifist organization; they've been more vocal than some of their peers about keeping advanced technology out of China's hands. This is about safeguards, not anti-war principle.
Dario's blog post declared his belief in "the existential importance of using AI to defend the United States" but restated two red lines: no mass domestic surveillance, no fully autonomous weapons. He pointed out the Pentagon's threats are "inherently contradictory" — one labels Claude a security risk, the other labels it essential to national security. "These threats do not change our position: we cannot in good conscience accede to their request." Pentagon spokesperson Sean Parnell called the narrative "fake and being peddled by leftists in the media." Under Secretary Emil Michael called Amodei "a liar" with "a God-complex." Over 200 Google and OpenAI staff signed a petition supporting Anthropic's red lines.

Friday Morning: Sam Altman Lines Up
While a lot of folks on social media were excited that Altman seemed to be lining up alongside Anthropic, OpenAI was clearly having conversations with the DOW at the same time.
In a late Thursday memo, Altman wrote: "We've long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." On CNBC Friday morning he said: "For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety." But the memo also explicitly said OpenAI was exploring whether they could deploy models in classified environments.

Friday Afternoon: Trump Goes Nuclear
The White House hadn't weighed in until this moment — nothing from Sacks, nothing from the President, nothing from the VP. Then it went from zero to a hundred.
Trump took to Truth Social in all caps: "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!" He directed every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology" with a six-month phase-out, and threatened "major civil and criminal consequences." Shortly after, Hegseth designated Anthropic a supply chain risk — a label historically reserved for US adversaries, never before applied to an American company — and declared that no contractor doing business with the military may conduct any commercial activity with Anthropic.

Legal Analysis: “Attempted Corporate Murder”
All of this happened within the span of an hour or two. Immediately the lawyers started figuring out what the implications actually were.
Senior research fellow Charlie Bullock noted that under the statute Hegseth is relying on, there are multiple requirements — risk assessment, written determination, congressional notification — that almost certainly weren't completed between 5 PM and Hegseth tweeting. The bigger question: since Anthropic serves models through AWS, Google Cloud, and Azure, and all three do business with the military, does the designation mean Amazon, Microsoft, and Google can't work with Anthropic either? Dean Ball, who helped shape this administration's AI policy, called it "simply attempted corporate murder" and said he "could not possibly recommend investing in American AI to any investor."

Friday Afternoon: Anthropic Responds
Anthropic noted that so far all of their information is coming from the same source as ours — social media. They hadn't received direct communication from the DOW or the White House.
The response mostly sought to assure customers that commercial access to Claude is completely unaffected. They noted the supply chain risk designation can legally only extend to use of Claude on DOW contract work — not how contractors use Claude for other customers. They promised to challenge any designation in court. Of course, as anyone who has studied Operation Choke Point knows, when it comes to government pressure on the private sector, all you need is a little push and an implication.

OpenAI Gets the Contract
Whatever was going on behind the scenes with OpenAI and the DOW, none of us commenting on Twitter have the actual context.
Within hours, Fortune reported that Altman told staff at an all-hands that a deal was emerging. Then Altman confirmed it: "We reached an agreement with the Department of War to deploy our models in their classified network." The deal includes the same red lines Anthropic was fighting for — prohibitions on mass surveillance and human responsibility for use of force. The DOW agreed to let OpenAI build their own safety stack with forward deployed engineers. Altman even asked the DOW to offer the same terms to all AI companies. Miles Brundage noted the details are confusing and called for employees to see the actual contract. Chris McGuire from the Council on Foreign Relations pointed out that the DOW rapidly cutting a deal with another company on similar terms "massively undercuts the logic underpinning its designation of Anthropic as a supply chain risk."

THE FIVE CAMPS OF OPINION

Opinions generally grouped into five positions: (1) Anthropic is right, there should be AI red lines; (2) Anthropic might have a reasonable moral take, but the government can't be constrained by it; (3) It doesn't matter whether Anthropic is right, a private company shouldn't set government policy; (4) Anthropic's moral stance is wrong too; (5) If the government doesn't want to work with a vendor, they should simply not work with them — but maybe don't try to kill them.

"Anthropic Is Right"
It is not left wing to oppose mass domestic surveillance and fully autonomous murder bots.
Erik Voorhees, a genuine libertarian willing to call out both sides, wrote: "Anthropic is def woke and lefty, but their refusal to permit Washington to use their tech to carry out warrantless mass surveillance of Americans is eminently based." Investor Dime pointed out that left/right framing doesn't belong here: "None of us voted for dystopian AI spyware surveilling us in a way that makes the Patriot Act look quaint."

"But China"
Among those firmly against Anthropic, the argument mostly came down to some version of: yeah, but China.
Mike Three: "People cheerleading for Anthropic either want China to win the AI supremacy war or they're so politically brain-rotted they don't fully understand what's at stake." Geiger Capital: "China doesn't give a crap about Anthropic's moral red lines. They are implementing AI into their entire military chain with zero democratic or civilian oversight."

Palmer Luckey: The Question of Control
Even if you disagree with where Palmer comes out on this, he's right that this is, at its core, a question of control — and by extension, a question of checks and balances.
Palmer's extended argument deserves reading in full. His core point: seemingly innocuous terms like "you cannot target innocent civilians" are actually moral minefields. Who is a civilian? What makes them innocent? What about a president using Madman Theory to threaten a dictator — does the corporation get to override that? "At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions." Part of the problem, and why people are sympathetic to Anthropic, is that the checks and balances on executive power — i.e., Congress — don't really seem to be doing their job.

"It's a Super Weapon, Not a SaaS Product"
PBD put it bluntly: "If you build a super weapon and it lives in a data center in the USA, it's not your super weapon. You don't own or control it. The people with the aircraft carriers and nuclear weapons do. This is how the world has always worked." Nathan Lands wrote: "AI is becoming critical infrastructure. If a private company can decide how the US government is allowed to use it, that's not ethics — that's corporate leverage over a sovereign nation."


"The Overreach Is the Problem"
Even among people sympathetic to the government's position, most seem uncomfortable not with the decision to stop working with Anthropic, but with the threats and retaliatory action that came with it.
Lindy founder Flo captured the nuance many felt: the government is rightly annoyed at a vendor thinking they can dictate terms, any company has a right to refuse service, but this does not justify going ballistic and treating them as an enemy of the nation. Gail Weiner asked: "If you're a brilliant AI researcher in London or Seoul or Berlin right now, watching the President threaten criminal prosecution against an AI company for having ethics, why would you build in America?" Dean Ball: "The US government just essentially announced its intention to impose Iran-level sanctions on an American company. This is by a profoundly wide margin the most damaging policy move I have ever seen USG try to take."

The OpenAI Narrative Fallout
One part of this story that'll be interesting to watch is how it shakes out narratively for both Anthropic and OpenAI.
Peter Wildeford wrote that Altman's "moral clarity lasted barely half a day." As of this recording, Claude is number two in the App Store. There's a non-trivial downside scenario for OpenAI that many aren't grasping: if a clean meme forms on TikTok and Instagram tying OpenAI to the Department of War, the reaction won't be analytical, it'll be visceral. Perception compounds faster than facts. Greg Brockman being one of Trump's biggest donors is already a widely shared talking point in progressive circles. For some, this Pentagon deal will read as confirmation of an actual pattern.

The Impossible Trilemma
This is too important to be eaten up as just another culture war issue. My plea is to ignore anyone who's trying to do that to this conversation.
Mike Solana framed the damnable complication best: we don't want to force private companies to do things they don't want to do, we don't want private companies running the military, and we are in an AI arms race with a country that controls its AI labs. "I don't really see any satisfying answer here for a free society that also needs to maintain an edge against a successful authoritarian country." Kristen Faulkner nailed it: "The Anthropic-Pentagon standoff is not a tech story. It's the moment AI ethics stopped being theoretical and became geopolitical. Whoever decides the ethics of AI will be deciding the ethics of society."

ALSO REFERENCED