
March 10, 2026

Anthropic sues Pentagon over AI safety blacklisting


Pentagon used Claude in Iran strikes while simultaneously calling it a security risk.

Dario Amodei met personally with Defense Secretary Pete Hegseth on Feb. 24, 2026. The Department of Defense wanted to license Claude for "all lawful use," a catchall it intended to cover autonomous targeting and mass domestic surveillance of American civilians. Amodei drew two explicit lines: Anthropic would not allow Claude to make fully autonomous lethal targeting decisions, and it would not allow Claude to power large-scale surveillance of Americans without judicial oversight. The two men did not reach an agreement.

Three days later, on Feb. 27, the administration ordered federal agencies and contractors to halt business with Anthropic. A formal designation letter arrived on March 4. Hegseth made the action public under the supply chain risk management authority of 10 U.S.C. 3252, part of the National Defense Authorization Act, stating that no contractor, supplier, or partner doing business with the U.S. military could conduct any commercial activity with Anthropic.

The supply chain risk designation was built to stop foreign spies from embedding surveillance tools in American defense systems. Congress designed it with companies like Huawei and ZTE in mind, firms suspected of allowing the Chinese government to install backdoors in their hardware. It had never been applied to a domestic American company, and national security attorneys told reporters there was no statutory basis for using it against a software company whose alleged offense was having ethical policies the administration disliked.

The designation triggered immediate contract reviews across federal agencies. Separate national security contracts were paused or canceled. Anthropic's legal filings estimated the total financial exposure in the millions, potentially billions, of dollars.

Anthropic's legal theory rests on a constitutional doctrine the Supreme Court has applied repeatedly: unconstitutional conditions. The government cannot condition the receipt of federal benefits on a recipient surrendering constitutionally protected speech. Anthropic argued it was exercising protected speech when it told the Pentagon what its product would and would not do, and that the designation was direct retaliation for that speech.

The company filed suit simultaneously in two federal venues: the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit. Legal analysts at Lawfare assessed that constitutional claims survive broad statutory review bars unless Congress clearly intended to foreclose them, and that the supply chain risk statute says nothing about precluding constitutional challenges.

The competitive dimension of the case has no precedent in American AI policy. Anthropic was the Department of Defense's preferred AI vendor before the dispute. A senior Pentagon official told Fortune that defense leaders had a "whoa moment" when they realized how deeply Claude had been integrated into military workflows. Despite the blacklisting, Claude was reportedly used in intelligence analysis during Operation Epic Fury, the U.S.-Israel strikes on Iran that began Feb. 27.

OpenAI announced its own expanded Pentagon partnership in the same week Anthropic was blacklisted. OpenAI's CEO Sam Altman publicly said Anthropic should not be designated a supply chain risk. Both OpenAI and xAI have usage restrictions in their terms of service limiting autonomous weapons applications, similar to Anthropic's policies, but neither was designated. Thirty former military and intelligence officials filed amicus briefs supporting Anthropic, alongside researchers from OpenAI and Google DeepMind, warning that designating a company for its safety policies would suppress the entire industry's ability to set ethical limits.

On the same day Anthropic filed its lawsuits, the Claude app surpassed ChatGPT in iPhone App Store downloads for the first time. Anthropic was adding more than one million new users daily as of March 5. The commercial momentum ran directly against the government's procurement campaign. Federal contracts represent one revenue stream. The civilian market is another, and the government's procurement power cannot reach it.

Former Trump White House AI adviser Dean Ball said publicly there was no clear statutory basis for a president to single out a named American company and direct agencies to divest from it without a formal legal process. Ball was not a general critic of the administration's technology policy. His willingness to say the approach lacked legal authority was a notable break from within the administration's own ideological orbit.

The Anthropic case fits a pattern of the administration using procurement as a political instrument. Before Anthropic, the administration had revoked security clearances of law firms whose attorneys represented Trump's adversaries, canceled DEI-related contracts, and threatened grant withdrawals from universities with diversity programs. Critics argue these actions collectively represent a novel use of the government's spending power to coerce ideological compliance from private sector actors who depend on federal contracts.

The historical parallel cited by Anthropic's legal team is the McCarthy-era loyalty oath requirements struck down by the Supreme Court in cases including Elfbrandt v. Russell (1966) and Wieman v. Updegraff (1952). In those cases the Court held the government cannot bar employment based on beliefs or associations without individual proof of actual disloyalty. Anthropic's attorneys argued the Pentagon's designation punishes protected speech rather than identifying a genuine security threat.

🤖 AI Governance · 📜 Constitutional Law · 🔒 Digital Rights

People, bills, and sources

Dario Amodei

CEO, Anthropic

Pete Hegseth

Secretary of Defense

Dean Ball

Former Trump White House AI policy director

Sam Altman

CEO, OpenAI

Todd Blanche

Deputy Attorney General

What you can do

1

research

Track the Anthropic v. Pentagon lawsuits through CourtListener federal court records

Anthropic filed two federal lawsuits on March 9, 2026 challenging the Pentagon's decision to blacklist the company as a 'supply chain risk to national security.' The blacklisting came after Anthropic refused to allow its Claude AI to be used for autonomous weapons and mass surveillance. The company argues the designation violates First Amendment protections and due process rights, representing unconstitutional retaliation for protected corporate speech policies.

Go to courtlistener.com and search "Anthropic v. Department of Defense" to find both dockets. Download and read the original complaint to see exactly what legal arguments Anthropic is making. Look for the government's motion to dismiss; it will lay out the executive branch's theory of its own authority. Check back monthly for rulings. Key context: Anthropic is the first American company designated under Section 889 of the 2019 NDAA, a law designed for foreign hardware suppliers, not domestic AI software companies.
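If you prefer to script the monthly check rather than search by hand, CourtListener also exposes a public REST API. Below is a minimal sketch that builds a search-API URL; it assumes the v4 search endpoint and its `q`/`type` query parameters (check the live API docs, as some endpoints require a free account token), and the query string shown is only an illustration.

```python
from urllib.parse import urlencode

# CourtListener's public REST search endpoint (v4 assumed here).
# Some endpoints require an API token from a free account -- see the docs.
API_BASE = "https://www.courtlistener.com/api/rest/v4/search/"

def build_search_url(query: str, result_type: str = "d") -> str:
    """Build a CourtListener search URL.

    result_type "d" requests docket results; other values follow the
    API's `type` parameter convention (e.g. "o" for opinions).
    """
    return API_BASE + "?" + urlencode({"q": query, "type": result_type})

# Illustrative query; refine the terms against the actual docket captions.
url = build_search_url('"Anthropic" "Department of Defense"')
print(url)
```

Fetching that URL (with `urllib.request` or `requests`) returns JSON docket results you can diff from month to month to spot new filings.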

2

civic action

Contact your senator about AI procurement retaliation

The Senate Armed Services Committee oversees Pentagon procurement. Demand your senator hold hearings on whether using supply chain risk designations to punish American companies for their safety policies is a legitimate use of that legal authority.

Hello, I am [NAME], a constituent from [CITY/STATE]. I am calling about the Pentagon's designation of Anthropic as a supply chain risk.

Key concerns:

  • Section 889 of the NDAA was written for foreign hardware suppliers, not American AI companies
  • Anthropic was blacklisted after its CEO refused to remove safety limits blocking autonomous weapons targeting and mass domestic surveillance
  • OpenAI was simultaneously cleared for classified network access, raising conflict-of-interest questions

Questions to ask:

  • Will Senator [NAME] support a Senate Armed Services Committee hearing on whether this designation was legally authorized?
  • Does Senator [NAME] believe the Pentagon can bar American companies from contracts for the content of their safety policies?

Specific request: I am asking Senator [NAME] to request a GAO review of the legal basis for supply chain risk designations applied to domestic software companies.

Question: What is the Senator's position on protecting American AI companies from procurement retaliation?

Thank you for your time.

3

research

Read Brennan Center analysis on government AI contracting

The Brennan Center tracks how federal agencies use procurement authority to shape private sector behavior. Their work on AI governance explains the legal frameworks at stake when the government uses purchasing power as a policy tool.

Visit brennancenter.org and search for their technology and national security coverage. Look for work on Section 889, AI procurement policy, and compelled speech in government contracting. Their explainers are written for general audiences and include source documents and court filings.