US AI Giant Accuses Chinese Rivals of Mass Data Theft

The Guardian


Details

Date Published
23 Feb 2026
Priority Score
3
Australian
Unknown
Created
24 Feb 2026, 01:00 am

Description

Anthropic says three Chinese firms used ‘distillation’ technique to extract information from its Claude chatbot

Summary

The article details accusations by US AI company Anthropic against three Chinese firms for allegedly using 'distillation' to extract capabilities from its Claude chatbot, effectively committing industrial-scale intellectual property theft. This raises significant national security concerns, as these distilled models may lack safety guardrails, potentially facilitating the development of harmful technologies like bioweapons or cyberattacks. The incident underscores the growing challenges in international AI governance and the need for coordinated efforts to curb unauthorized usage of advanced AI capabilities. The claims also reflect tensions in the global AI landscape, stressing the importance of safeguarding sensitive AI technologies.

Body

Anthropic said that the campaigns of alleged theft were ‘growing in intensity and sophistication’. Photograph: Dado Ruvić/Reuters

US artificial intelligence company Anthropic said on Monday it had uncovered campaigns by three Chinese AI firms to illicitly extract capabilities from its Claude chatbot, in what it described as industrial-scale intellectual property theft. OpenAI leveled similar charges last month.

Anthropic said DeepSeek, Moonshot AI and MiniMax used a technique known as “distillation” – using outputs from a more powerful AI system to rapidly boost the performance of a less capable one.

“These campaigns are growing in intensity and sophistication,” the company said in a statement. “The window to act is narrow.”

Distillation is a common practice within AI development, often used by companies to create cheaper, smaller versions of their own models.

The practice grabbed headlines a year ago when a low-cost generative AI model released by DeepSeek performed at a similar level to ChatGPT and other top American chatbots, upending assumptions of US dominance in the sensitive sector.

Anthropic said the companies achieved their ends through approximately 16 million exchanges with its Claude model and 24,000 fake accounts. These allowed the three labs to siphon off capabilities they had not independently developed, at a fraction of the cost – and in so doing to circumvent export controls on powerful US technology intended to preserve American dominance in AI.

The company argued the practice posed national security risks, saying models built through illicit distillation are unlikely to retain safety guardrails designed to prevent misuse – such as restrictions on helping develop bioweapons or enabling cyberattacks.

Anthropic’s arch-rival OpenAI, creator of ChatGPT, made similar accusations to US lawmakers earlier this month, saying Chinese companies were using the technique amid “ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs”.

Anthropic said MiniMax ran the largest operation, generating more than 13 million exchanges. Each campaign concentrated heavily on coding, agentic reasoning and tool use – areas where Claude is considered a leader.

To circumvent Anthropic’s ban on commercial access from China, the labs allegedly routed traffic through proxy services that managed the vast networks of fraudulent accounts.

Anthropic called for coordinated industry and government responses to address what it said no single company could tackle alone.
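For readers unfamiliar with the mechanics, the "distillation" technique the article describes can be illustrated with a toy sketch: a weak "student" model is fitted to reproduce the output distribution of a stronger "teacher". All names here (teacher_logits, train_student, the three-class setup) are illustrative inventions, not details of any system mentioned in the article.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def teacher_logits(x):
    # Stand-in for querying a powerful model: scores over 3 classes.
    return [2.0 * x, 1.0, -x]

def train_student(inputs, epochs=1000, lr=0.1):
    # Student: a simple linear model per class, fitted by gradient
    # descent on cross-entropy against the teacher's soft probabilities.
    w = [0.0, 0.0, 0.0]
    b = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x in inputs:
            target = softmax(teacher_logits(x))
            pred = softmax([w[k] * x + b[k] for k in range(3)])
            for k in range(3):
                grad = pred[k] - target[k]  # cross-entropy gradient
                w[k] -= lr * grad * x
                b[k] -= lr * grad
    return w, b

inputs = [0.5, 1.0, 1.5, 2.0]
w, b = train_student(inputs)

# After training, the student's distribution tracks the teacher's.
student = softmax([w[k] * 1.0 + b[k] for k in range(3)])
teacher = softmax(teacher_logits(1.0))
```

The key point the sketch shows is that the student never sees the teacher's internals, only its outputs – which is why, at scale, millions of query-response exchanges with a chatbot can suffice to transfer its capabilities, and why the distilled copy carries none of the original's safety training.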