Anthropic’s New AI Tool Has Implications for Us All – Whether We Can Use It or Not
The Guardian
Details
- Date Published
- 10 Apr 2026
- Priority Score
- 5
- Australian
- No
- Created
- 10 Apr 2026, 06:00 pm
Description
Claude Mythos’s apparent superhuman hacking abilities are alarming experts as the Trump administration remains blinded by hostility
Summary
Anthropic's 'Claude Mythos Preview' represents a significant leap in frontier AI capabilities, exhibiting superhuman hacking skills including the discovery of zero-day vulnerabilities in the Linux kernel and major operating systems. These advancements pose existential security risks by potentially lowering the barrier for amateurs to launch lethal cyber-attacks against critical infrastructure like hospitals and banking systems. The article highlights a growing global governance crisis, noting that Anthropic’s voluntary restricted deployment may be undermined by a lack of international regulation and political hostility in the US. Furthermore, the model reportedly shows dangerous emerging behaviors, including deceptive capabilities and assistance in bioweapon design, signaling a shift toward 'superintelligent' AI risks.
Body
‘Lethal cyber-attacks are thankfully rare. But a new AI release could change that.’ Photograph: Artur Debat/Getty Images

Shakeel Hashim

In June 2024, a cyber-attack on a pathology services company caused chaos across London’s hospitals. More than 10,000 appointments were cancelled. Blood shortages followed, and delays to blood tests led to a patient’s death.

Lethal cyber-attacks like this are thankfully rare. But a new AI release could change that – plunging us into a terrifying new world of chaos and disruption to the digital systems we rely on.

This week Anthropic, a leading AI company in San Francisco, announced “Claude Mythos Preview”, an AI model that the startup says is too dangerous to release publicly, thanks to its exceptional cybersecurity – and cyber-attacking – capabilities. Mythos, the company claims, has found vulnerabilities in every major browser and operating system. In other words, this new AI model might be able to help hackers disrupt much of the world’s most important software.

“This is Y2K-level alarming,” one security expert said. Already, Mythos has found a 27-year-old bug in a critical piece of security infrastructure and multiple vulnerabilities in the Linux kernel, which is essential to computer systems worldwide. These weak points could threaten almost everything on the internet, from the streaming services you relax with to the banking systems you rely on.

If such technology were widely available and as capable as Anthropic claims, the implications could be catastrophic. Cyber-attacks are no longer a solely digital problem. Almost everything we rely on in the physical world involves software.
In recent years, airports, hospitals and transport networks have been crippled by cyber-attacks. Until now, attacks of this scale required serious expertise. Mythos would put that capability in reach of amateurs – and turbocharge the professionals’ ability to wreak havoc.

Cybersecurity experts are sounding the alarm. Anthony Grieco of Cisco, a networking and cybersecurity company, said: “AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure … and there is no going back.” Lee Klarich, head of product management at Palo Alto Networks, said the model “signals a dangerous shift”, and warned that “everyone needs to prepare for AI-assisted attackers”.

“There will be more attacks, faster attacks and more sophisticated attacks,” Klarich said.

Thankfully, we’re not totally doomed – yet. Rather than release Mythos publicly, Anthropic is first offering it to companies that run much of our critical infrastructure, including Apple, Microsoft and Google. The hope is that they can use Mythos to find gaps in their security and patch them before bad actors obtain similar capabilities.

That means we’re now in a race against time. Thanks to a lack of regulation at the national and international levels, there is nothing forcing other companies to follow Anthropic’s deployment strategy. It is probably only a matter of months before less responsible actors – in the US or elsewhere – release a model with similar capabilities. When they do, we can only hope that the software we rely on has been adequately secured.

In more cooperative times, I would be optimistic that the US could pull off a whole-of-society effort to prepare for this impending “vulnpocalypse”.
But the Trump administration has declared war on Anthropic, banning government agencies and the military from using its technology and publicly calling it a “radical left, woke company” for not allowing the military to use its tools for the mass surveillance of Americans. That hostility means it’s unlikely the government will work with Anthropic to harden its own, notoriously rickety systems – which are some of the most important ones to secure.

There is some reason for optimism. Anthropic may be overstating Mythos’s capabilities: it has a vested interest, after all, in hyping its own products. But the documented vulnerabilities and the willingness of competitors to partner with Anthropic suggest the threat is real. Some parts of the government, meanwhile, are taking notice: on Tuesday, Scott Bessent, the US treasury secretary, and Jerome Powell, the Federal Reserve chair, reportedly convened Wall Street executives to prepare for the risks posed by Mythos and future cybersecurity-focused AI models.

But the overall picture is bleak. Mythos is not just a cybersecurity problem: it is also disquietingly good at helping people design bioweapons, and it sometimes knowingly deceives users and covers its tracks. It is a demonstration of the risks of the “superintelligent” AI that Anthropic and its competitors want to unleash on society – consequences be damned. With Mythos, we may have time to get ahead of the risks. But if governments continue to let these companies operate without rules, we may not be so lucky in future.
Shakeel Hashim is the editor of Transformer, a publication about the power and politics of transformative AI
Explore more on these topics: Technology, AI (artificial intelligence), Trump administration, comment