Trump Orders U.S. Agencies to Halt Use of Anthropic Technology Amid AI Ethics Dispute
The Guardian
Details
- Date Published
- 27 Feb 2026
- Priority Score
- 4
- Australian
- No
- Created
- 27 Feb 2026, 11:30 pm
Description
Department of Defense and artificial intelligence company were unable to reach agreement before deadline
Summary
In a notable escalation of tensions over AI ethics, U.S. President Donald Trump ordered federal agencies to cease using Anthropic’s AI technology after the company and the Department of Defense failed to reach an agreement on usage terms. The dispute centers on ethical guidelines, with Anthropic refusing to remove safety guardrails that prevent applications such as mass surveillance or autonomous weapons systems. The standoff underscores a significant divide between technology companies that prioritize AI safety and government demands for broader access to AI capabilities. It also shows how AI technology can become a point of contention in national security, illustrating the difficulty of ensuring safety while harnessing AI's power.
Body
Dario Amodei, co-founder and chief executive officer of Anthropic. Photograph: Chris Ratcliffe/Bloomberg via Getty Images
Donald Trump said Friday he will direct all federal agencies to “IMMEDIATELY CEASE” all use of Anthropic technology.

The Department of Defense and Anthropic hit an impasse, with neither side backing down, as a deadline for an agreement lapsed on Friday afternoon. The Pentagon had demanded the artificial intelligence company loosen ethical guidelines on its AI systems or face severe consequences.

Trump weighed in just an hour before the deadline, saying on Truth Social: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.”

“WE will decide the fate of our Country – NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.

Shortly after the deadline passed, defense secretary Pete Hegseth said he was directing his department to classify Anthropic as a supply-chain risk to national security, claiming Anthropic’s “stance is fundamentally incompatible with American principles”. This type of designation is normally used for foreign adversaries and could endanger the company’s partnerships with other businesses.

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote on X.
“America’s warfighters will never be held hostage by the ideological whims of Big Tech.”

Hegseth said the Pentagon, which had a $200m, two-year agreement with Anthropic, will continue to use Anthropic’s AI services for a transition period of no more than six months.

Anthropic did not respond to the Guardian’s request for comment on Friday’s developments.

On Thursday, Anthropic chief executive Dario Amodei said in a statement that his company “cannot in good conscience accede” to the Pentagon’s demand for unrestricted use of its AI tools.

The public showdown began earlier this week, when the Department of Defense and Anthropic entered into discussions about the military’s use of the company’s Claude AI system. The talks broke down when the two sides were unable to reach agreement over safety guardrails.

Anthropic, which presents itself as the most safety-forward of the leading AI companies, had been mired in months of disagreement with the Pentagon even before the public discussions began this week. US defense officials have pushed for unfettered access to Claude’s capabilities, which they say can help protect the country, while Anthropic has resisted allowing its product to be used for mass surveillance or for autonomous weapons systems that can kill without human input.

“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” Amodei said Thursday. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”

Pentagon spokesperson Sean Parnell said Thursday the defense department “has no interest” in using AI for mass surveillance or to develop autonomous weapons. “This narrative is fake and being peddled by leftists in the media,” he said.

In Silicon Valley, Anthropic has drawn support from its fiercest rivals.
Top executives at AI companies have publicly sided with Anthropic, including OpenAI CEO Sam Altman, who indicated in a CNBC interview on Friday that OpenAI shares the same red lines as Anthropic.

Nearly 500 OpenAI and Google employees have also signed an open letter saying “we will not be divided”. Both OpenAI and Google also have contracts with the military.

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” reads the letter. “They’re trying to divide each company with fear that the other will give in.”