US President Donald Trump on Friday ordered every US federal agency to immediately stop using technology from AI startup Anthropic, escalating a bitter standoff over the Pentagon’s demand for unrestricted access to the company’s Claude AI model. Trump gave agencies six months to phase out Anthropic’s tools from critical military and intelligence work.

Defence Secretary Pete Hegseth followed up by designating Anthropic a “supply chain risk to national security”—a label historically reserved for foreign adversaries like China’s Huawei. The designation bars any contractor, supplier, or partner doing business with the US military from conducting commercial activity with Anthropic.

The crisis began earlier this week when Hegseth gave Anthropic CEO Dario Amodei a Friday deadline: let the Pentagon use Claude however it wants, or lose the company’s $200 million government contract. Anthropic refused, insisting its AI tools should not be used for mass surveillance of Americans or for fully autonomous weapons systems that can kill without human oversight. The Pentagon has maintained it has “no interest” in using AI for either purpose, but Hegseth’s own post told a different story—demanding “full, unrestricted access” to Anthropic’s models and accusing the company of placing “Silicon Valley ideology above American lives.”
Anthropic digs in, says it will fight “supply chain risk” label in court
Anthropic hit back with a detailed statement, saying it would challenge the designation in court. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said Friday night. It called the supply chain risk tag “unprecedented” and “legally unsound,” noting it has never before been publicly applied to an American company.

The company also moved to reassure its commercial customers. It said the designation, if formally adopted, would affect only the use of Claude on Department of War contract work under 10 USC 3252—not how contractors use Claude to serve other customers. Individual users and commercial API customers would be “completely unaffected,” Anthropic said.

Legal experts backed that reading. University of Minnesota law professor Alan Rozenshtein said the supply chain risk label “clearly was not designed for an American company that has a contract dispute with the government.” Former Trump AI adviser Dean Ball was blunter, calling the move “attempted corporate murder.”
OpenAI backs rival Anthropic’s “red lines” on military AI use
In a rare show of solidarity, rival OpenAI CEO Sam Altman told staff he shared Anthropic’s “red lines” and wanted to “help de-escalate things.” Around 70 OpenAI employees and 175 Google staffers signed an open letter backing Anthropic’s stance, warning that the Pentagon was “trying to divide each company with fear that the other will give in.”

Hours after the 5:01 pm deadline passed, Altman announced that OpenAI had reached its own deal with the Pentagon to deploy models on classified networks—with safety guardrails intact. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman posted on X.

The Pentagon is also ready to move forward with Elon Musk’s Grok AI on its classified systems, though current and former government officials consider it an inferior product compared to Claude.

The fallout could be far-reaching. Anthropic’s Claude was the first frontier AI model deployed on the US government’s classified networks back in June 2024, and is actively used by the CIA and NSA for intelligence analysis. Forcing it off government systems could disrupt ongoing operations.


















