Anthropic filed a federal lawsuit against the US Department of Defense and other federal agencies on Monday, challenging its designation of the AI company as a “supply-chain risk.”
The Pentagon formally sanctioned Anthropic last week, capping a weeks-long, publicly aired disagreement over limits on use of its generative AI technology for military applications such as autonomous weapons.
“We do not believe this action is legally sound, and we see no choice but to challenge it in court,” Anthropic CEO Dario Amodei wrote in a blog post on Thursday.
The lawsuit, which was filed in a federal court in California, requested that a judge reverse the designation and stop federal agencies from enforcing it. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic said in the filing. “Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”
Anthropic is also seeking a temporary restraining order to continue its government sales. The company proposed that the government respond to that request by 9 pm Pacific on Wednesday and that a judge hold a hearing on the issue on Friday.
The AI startup, which develops a suite of AI models called Claude, is facing the possibility of losing hundreds of millions of dollars in annual revenue from the Pentagon and the rest of the US government. It also may lose the business of software companies that incorporate Claude into services they sell to federal agencies. Several Anthropic customers have reportedly said they are pursuing alternatives due to the Defense Department’s risk designation.
Amodei wrote that the “vast majority” of Anthropic’s customers will not have to make changes. The US government’s designation “plainly applies only to the use of Claude by customers as a direct part of contracts with the” military, he said. General use of Anthropic technologies by military contractors should be unaffected.
The Department of Defense, which also goes by the Department of War, declined to comment about Anthropic’s lawsuit.
White House spokesperson Liz Huston told WIRED on Friday that “our military will obey the United States Constitution—not any woke AI company’s terms of service.” She added that the administration is ensuring its “courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders.”
Attorneys with expertise in government contracting say Anthropic faces a difficult battle in court. The rules that authorize the Department of Defense to label a tech company as a supply-chain risk don’t allow for much in the way of an appeal. “It’s 100 percent in the government’s prerogative to set the parameters of a contract,” says Brett Johnson, a partner at the law firm Snell & Wilmer. The Pentagon, he says, also has the right to express that a product of concern, if used by any of its suppliers, “hurts the government’s ability to effectuate its mission.”
Anthropic’s best chance of success in court could be proving it was singled out, Johnson says. Soon after Defense Secretary Pete Hegseth announced that he was designating Anthropic a supply-chain risk, rival OpenAI announced it had struck a new contract with the Pentagon. That could be instrumental to Anthropic’s legal argument if the company can demonstrate it was seeking similar terms as the ChatGPT developer.
OpenAI said its deal included contractual and technical means of assuring its technology would not be used for mass domestic surveillance or to direct autonomous weapons systems. It added that it opposed the action against Anthropic and did not know why its rival could not reach the same deal with the government.
Military Priority
Hegseth has prioritized military adoption of AI technologies, with posters recently seen in the Pentagon that show him pointing and read, “I want you to use AI.” The dispute with Anthropic kicked up in January after Hegseth ordered several AI suppliers to agree that the department was free to use their technologies for any lawful purpose.
Anthropic, which is the only company currently providing AI chatbot and analysis tools for the military’s most sensitive use cases, pushed back. It contends that its technologies are not yet capable enough to be used for mass domestic surveillance of Americans or fully autonomous weapons. Hegseth has said Anthropic wants veto power over judgments that should be left to the Defense Department.
Historically, supply-chain-risk designations have been reserved for keeping Chinese technologies out of US military systems to preserve operational security. Anthropic warned recently that using that label on a US company has “set a dangerous precedent.”
Last week, a coalition of tech industry groups, including TechNet, Business Software Alliance, and the Software Information Industry Association, urged the Trump administration to reconsider the designation, arguing in a letter that singling out an American company as an adversary rather than an asset would have a chilling effect on US innovation. The groups represent major technology companies including Apple, Google, Nvidia, Microsoft, Meta, IBM, Salesforce, and Oracle.
Another coalition of high-profile technologists and former national security advisers sent a similar letter to members of the Senate Armed Services Committee, warning that “the use of this authority against a domestic American company is a profound departure from its intended purpose.” The letter demanded that Congress establish clear policies on the use of AI for domestic surveillance and autonomous lethal weapons systems.
Signatories included former CIA director Michael Hayden, Harvard Law School professor Lawrence Lessig, former Andreessen Horowitz partner John O’Farrell, former undersecretary of the Army Brad Carson, and several retired admirals.
US military officials use Claude mostly through independent tools that incorporate it, including Palantir’s Maven Smart System, people familiar with the matter have told WIRED. They use it both for basic tasks such as document writing and for higher-stakes analysis such as planning attacks, those people said.
If the supply-chain-risk designation holds, Palantir and other military contractors would have to swap in alternatives for Claude, potentially at increased cost to the government. Several AI startups focused on defense uses, such as Vannevar Labs, in recent days have been pitching their models as capable replacements. Palantir did not respond to a request for comment.
Several government agencies outside of the military have said they would stop using Claude in order to comply with a directive from President Donald Trump last week.
Microsoft spokesperson Kate Frischmann said the tech giant would continue offering Claude to US agencies and businesses except the Department of Defense.
OpenAI did not specify when its technology would be ready to replace Claude in military software. Hegseth said phasing out Anthropic’s services could take up to six months. On Thursday, Amodei wrote that “productive conversations” with the Pentagon are ongoing to resolve the dispute, and that either way, the company will support the department “for as long as we are permitted to do so.”
Lauren Goode contributed reporting.
Updated: 3/9/26, 10 am PST: This story was updated with additional details about Anthropic’s lawsuit, comment from the White House, and to note that the Department of Defense declined to comment.