March 10, 2026
Anthropic sues Pentagon over AI safety blacklisting
Pentagon used Claude in Iran strikes while simultaneously calling it a security risk.
"Dario Amodei met personally with Defense Secretary
Pete Hegseth on Feb. 24, 2026. The Department of Defense wanted to license Claude for all lawful use, a catchall it intended to cover autonomous targeting and mass domestic surveillance of American civilians. Amodei drew two explicit lines: Anthropic would not allow Claude to make fully autonomous lethal targeting decisions, and it would not allow Claude to power large-scale surveillance of Americans without judicial oversight. The two men did not reach an agreement.\n\nThree days later, on Feb. 27, the administration ordered federal agencies and contractors to halt business with Anthropic. A formal designation letter arrived on March 4. Hegseth made the action public under the supply chain risk management authority of 10 U.S.C. 3252, part of the National Defense Authorization Act, stating that no contractor, supplier, or partner doing business with the U.S. military could conduct any commercial activity with Anthropic."
"The supply chain risk designation was built to stop foreign spies from embedding surveillance tools in American defense systems. Congress designed it with companies like Huawei and ZTE in mind, firms suspected of allowing the Chinese government to install backdoors in their hardware. It had never been applied to a domestic American company, and national security attorneys told reporters there was no statutory basis for using it against a software company whose alleged offense was having ethical policies the administration disliked.\n\nThe designation triggered immediate contract reviews across federal agencies. Separate national security contracts were paused or canceled. Anthropic's legal filings estimated the total financial exposure at millions, potentially billions, of dollars."
"Anthropic's legal theory rests on a constitutional doctrine the Supreme Court has applied repeatedly: unconstitutional conditions. The government cannot condition the receipt of federal benefits on a recipient surrendering constitutionally protected speech. Anthropic argued it was exercising protected speech when it told the Pentagon what its product would and would not do, and that the designation was direct retaliation for that speech.\n\nThe company filed suit simultaneously in two federal venues: the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit. Legal analysts at Lawfare assessed that constitutional claims survive broad statutory review bars unless Congress clearly intended to foreclose them, and that the supply chain risk statute says nothing about precluding constitutional challenges."
"The competitive dimension of the case has no precedent in American AI policy. Anthropic was the Department of Defense's primary preferred AI vendor before the dispute. A senior Pentagon official told Fortune that defense leaders had a "whoa moment" when they realized how deeply Claude had been integrated into military workflows. Despite the blacklisting, Claude was reportedly used in intelligence analysis during Operation Epic Fury, the U.S.-Israel strikes on Iran that began Feb. 27.\n\nOpenAI announced its own expanded Pentagon partnership in the same week Anthropic was blacklisted. OpenAI's CEO Sam Altman publicly said Anthropic should not be designated a supply chain risk. Both OpenAI and xAI have usage restrictions in their terms of service limiting autonomous weapons applications similar to Anthropic's policies, but neither was designated. Thirty former military and intelligence officials filed amicus briefs supporting Anthropic, alongside researchers from OpenAI and Google DeepMind, warning that designating a company for its safety policies would suppress the entire industry's ability to set ethical limits."
"On the same day Anthropic filed its lawsuits, the Claude app surpassed ChatGPT in iPhone App Store downloads for the first time. Anthropic was adding more than one million new users daily as of March 5. The commercial momentum ran directly against the government's procurement campaign. Federal contracts represent one revenue stream. The civilian market is another, and the government's procurement power cannot reach it.\n\nFormer Trump White House AI adviser Dean Ball said publicly there was no clear statutory basis for a president to single out a named American company and direct agencies to divest from it without a formal legal process. Ball was not a general critic of the administration's technology policy. His willingness to say the approach lacked legal authority was a notable break from within the administration's own ideological orbit."
"The Anthropic case fits a pattern of the administration using procurement as a political instrument. Before Anthropic, the administration had revoked security clearances of law firms whose attorneys represented Trump's adversaries, canceled DEI-related contracts, and threatened grant withdrawals from universities with diversity programs. Critics argue these actions collectively represent a novel use of the government's spending power to coerce ideological compliance from private sector actors who depend on federal contracts.\n\nThe historical parallel cited by Anthropic's legal team is the McCarthy-era loyalty oath requirements struck down by the Supreme Court in cases including Elfbrandt v. Russell (1966) and Wieman v. Updegraff (1952). In those cases the Court held the government cannot bar employment based on beliefs or associations without individual proof of actual disloyalty. Anthropic's attorneys argued the Pentagon's designation punishes protected speech rather than identifying a genuine security threat."
Key figures
Dario Amodei, CEO, Anthropic
Pete Hegseth, Secretary of Defense
Dean Ball, former Trump White House AI policy director
Sam Altman, CEO, OpenAI
Deputy Attorney General