Pentagon and Anthropic in Conflict Over AI Usage Restrictions

iTnews

Details

Date Published
15 Feb 2026
Description

Policies and safeguards under review.

Summary

The Pentagon is reportedly considering ending its partnership with AI company Anthropic over disagreements about usage restrictions on the company's models. The conflict stems from the Pentagon's push to use AI for broad military applications, including intelligence collection and weapons development, while Anthropic has declined to lift restrictions that bar mass surveillance and fully autonomous weapons. The dispute underscores the ethical and policy tensions between AI developers and governmental military interests, with implications for debates on AI safety and governance, particularly the potential for misuse in military contexts.

Body

The Pentagon is considering ending its relationship with artificial intelligence company Anthropic over the company's insistence on keeping some restrictions on how the U.S. military uses its models, Axios reported, citing an administration official.

The Pentagon is pushing four AI companies to let the military use their tools for "all lawful purposes", including weapons development, intelligence collection and battlefield operations. But Anthropic has not agreed to those terms, and the Pentagon is growing frustrated after months of negotiations, according to the Axios report. The other companies are OpenAI, Google and xAI.

An Anthropic spokesperson said the company had not discussed the use of its AI model Claude for specific operations with the Pentagon. The spokesperson said conversations with the U.S. government so far had focused on a specific set of usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, none of which related to current operations. The Pentagon did not immediately respond to Reuters' request for comment.

Anthropic's Claude model was used in the U.S. military's operation to capture former Venezuelan President Nicolas Maduro, with Claude deployed via Anthropic's partnership with data firm Palantir, the Wall Street Journal reported.

Reuters reported last week that the Pentagon was pushing top AI companies, including OpenAI and Anthropic, to make their artificial intelligence tools available on classified networks without many of the standard restrictions the companies apply to users.