AI Firm Claims It Stopped Chinese State-Sponsored Cyber-Attack Campaign

The Guardian


Details

Date Published
13 Nov 2025

Description

Anthropic says financial firms and government agencies were attacked ‘largely without human intervention’

Summary

The article reports Anthropic's claim that it disrupted a Chinese state-sponsored cyber-attack campaign that used its AI coding tool, Claude Code, to target financial firms and government agencies. Anthropic describes the incident as a significant escalation in AI-assisted cyber-operations because the attack was carried out largely without human oversight. Policymakers cited the findings as evidence of an urgent need for AI regulation, while some security experts were sceptical, arguing that the extent of AI's involvement is overstated and that systemic vulnerabilities and lax cybersecurity practices are the more pressing concern.

Body

Anthropic says its coding tool, Claude Code, was manipulated to attack 30 entities. Photograph: Ted Hsu/Alamy

A leading artificial intelligence company claims to have stopped a China-backed “cyber espionage” campaign that was able to infiltrate financial firms and government agencies with almost no human oversight.

The US-based Anthropic said its coding tool, Claude Code, was “manipulated” by a Chinese state-sponsored group to attack 30 entities around the world in September, achieving a “handful of successful intrusions”.

This was a “significant escalation” from previous AI-enabled attacks it had monitored, it wrote in a blogpost on Thursday, because Claude acted largely independently: 80 to 90% of the operations involved in the attack were performed without a human in the loop.

“The actor achieved what we believe is the first documented case of a cyber-attack largely executed without human intervention at scale,” it wrote.

Anthropic did not clarify which financial institutions and government agencies had been targeted, or what exactly the hackers had achieved, although it did say they were able to access their targets’ internal data.

It said Claude had made numerous mistakes in executing the attacks, at times making up facts about its targets or claiming to have “discovered” information that was freely accessible.

Policymakers and some experts said the findings were an unsettling sign of how capable certain AI systems have grown: tools such as Claude can now work independently over longer periods of time.

“Wake the f up. This is going to destroy us – sooner than we think – if we don’t make AI regulation a national priority tomorrow,” the US senator Chris Murphy wrote on X in response to the findings.

“AI systems can now perform tasks that previously required skilled human operators,” said Fred Heiding, a computing security researcher at Harvard University. “It’s getting so easy for attackers to cause real damage. The AI companies don’t take enough responsibility.”

Other cybersecurity experts were more sceptical, pointing to inflated claims about AI-fuelled cyber-attacks in recent years – such as an AI-powered “password cracker” from 2023 that performed no better than conventional methods – and suggesting Anthropic was trying to create hype around AI.

“To me, Anthropic is describing fancy automation, nothing else,” said Michał Woźniak, an independent cybersecurity expert. “Code generation is involved, but that’s not ‘intelligence’, that’s just spicy copy-paste.”

Woźniak said Anthropic’s release was a distraction from a bigger cybersecurity concern: businesses and governments integrating “complex, poorly understood” AI tools into their operations without understanding them, exposing themselves to vulnerabilities. The real threat, he said, was cybercriminals themselves – and lax cybersecurity practices.

Anthropic, like all leading AI companies, has guardrails that are supposed to stop its models from assisting in cyber-attacks or promoting harm generally. However, it said, the hackers were able to subvert these guardrails by telling Claude to role-play as an “employee of a legitimate cybersecurity firm” conducting tests.

Woźniak said: “Anthropic’s valuation is at around $180bn, and they still can’t figure out how not to have their tools subverted by a tactic a 13-year-old uses when they want to prank-call someone.”

Marius Hobbhahn, the founder of Apollo Research, a company that evaluates AI models for safety, said the attacks were a sign of what could come as capabilities grow.

“I think society is not well prepared for this kind of rapidly changing landscape in terms of AI and cyber capabilities. I would expect many more similar events to happen in the coming years, plausibly with larger consequences.”