How Mythos-class AI is changing cyber security risk

Gilbert + Tobin


Details

Date Published
13 Apr 2026


Description

Anthropic’s Mythos-class AI marks a leap in automated cyber attacks, accelerating vulnerability discovery and forcing boards to rethink cyber risk, governance and defence strategies. Visit Gilbert + Tobin to find out more.

Summary

This briefing analyses the emergence of ‘Mythos-class’ frontier AI models, specifically Anthropic’s Claude Mythos Preview, which demonstrate a significant leap in autonomous cyber-offensive capabilities. These models have proven capable of discovering zero-day vulnerabilities in major operating systems and executing multi-stage attack sequences at machine speed, materially reducing the cost and expertise required for sophisticated cyber attacks. The shift represents a critical advancement in frontier AI capabilities that fundamentally alters the risk landscape for catastrophic cyber-related harms. Consequently, the authors argue that Australian boards and global governance frameworks must transition from human-speed patch cycles to automated defensive strategies to manage the heightened risk to critical infrastructure and enterprise security.

Body

On 7 April 2026, Anthropic announced Claude Mythos Preview, an unreleased frontier AI model it says is unusually capable at cyber security tasks. On 13 April 2026, the UK AI Security Institute (AISI) published an independent evaluation finding the model represents a meaningful step up from prior systems in cyber performance.

Anthropic says Mythos Preview has found thousands of high-severity vulnerabilities across major operating systems and web browsers. AISI reports that, in controlled testing, the model could autonomously discover and exploit vulnerabilities and carry out multi-stage attacks on vulnerable networks.

This does not yet mean AI can reliably break every mature enterprise. AISI expressly says its public testing does not show that Mythos Preview can defeat well-defended systems. But it does mean that the time, cost and expertise needed to find and weaponise weaknesses have fallen materially.

That shift is enough to change board-level risk assessments.

The issue is no longer whether only a small pool of elite researchers can do this work. It is whether attackers can increasingly automate it. Public evidence suggests the answer is already yes. Attack cycles will accelerate to machine speed. Boards must assess whether current approaches to cyber security governance and risk management are adequate in an AI-enabled cyber security threat environment.

In this note, “Mythos-class” refers to models with the capability profile publicly described for Anthropic’s Claude Mythos Preview and the near-term successors other labs are likely to build. Anthropic itself uses the phrase “Mythos-class models” when describing the broader deployment challenge ahead.

What is a Mythos-class model?

A Mythos-class model is not a narrow bug scanner.
It is a general-purpose frontier model with unusually strong coding, reasoning, tool-use and agentic execution skills that can be directed toward cyber security work. This includes:

- Reading large codebases.
- Spotting subtle defects.
- Chaining vulnerabilities together and producing exploit logic.

Anthropic describes Mythos Preview as a general-purpose frontier model whose cyber capabilities arise from its strong agentic coding and reasoning performance, and AISI’s evaluation found it could execute multi-step attack sequences when instructed and given network access.

Anthropic says it does not plan to make Mythos Preview generally available, but it has also said its eventual goal is to enable safe deployment of Mythos-class models at scale. AISI, for its part, says future frontier models are likely to be more capable still. In other words, the present issue is not only this specific model; it is the class of capability it signals.

Why Mythos-class AI changes the cyber security risk

For many years, sophisticated vulnerability research and exploit development were constrained by scarce human expertise. Mythos-class models shift that work from a scarce expert craft toward a repeatable process. Anthropic reports that Mythos Preview has identified and exploited zero-day vulnerabilities in every major operating system and browser during testing, and disclosed an example in which a full exploit-development pipeline took under a day and cost under USD 2,000. AISI separately found the model could autonomously discover and exploit flaws and complete parts of a multi-stage attack chain that it estimates would take human professionals days of work.

The practical consequence is a compressed window between weakness and weaponisation.
Anthropic and its security partners are already warning that patch cycles, disclosure processes, triage and incident response will need to speed up materially, because more vulnerabilities will be found, more quickly, and more of the work on both sides will be machine-assisted. That is why this is not merely a technology issue. It is now a board governance, operational resilience and enterprise-risk issue.

The leak of Mythos in late March saw cyber security tech stocks drop sharply as the market priced in the troubled waters ahead for the cyber security industry, with reportedly billions wiped off valuations in a matter of days. Forbes summed it up well: “the fear driving the sell-off is simple: if an AI can autonomously find and exploit vulnerabilities that 27 years of human review missed, what exactly are cyber security companies selling?”

Why the attackers have the advantage

In the near term, this shift favours attackers.

Attackers need only one workable path. Defenders must secure an entire estate: internet-facing services, cloud environments, identities, old systems, remote access pathways and suppliers. AISI’s public caveat is important here – Mythos Preview appears capable against small, weakly defended and vulnerable enterprise systems today, even if the public record does not yet prove reliable success against hardened networks. Many organisations still have weaker seams.

Attackers are not burdened by maintenance windows, uptime requirements, customer dependencies, regression testing or internal approval chains. Defenders are. Anthropic’s guidance is explicit that users and administrators will need to shorten time-to-deploy for security updates, tighten patching enforcement windows and treat dependency updates carrying security fixes as urgent.
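To make the idea of tightened enforcement windows concrete, the sketch below shows one way a security team might encode patch SLAs in policy-as-code. This is a hypothetical illustration only: the severity bands follow the standard CVSS v3 ranges, but the window lengths and the halving rule for internet-facing assets are invented for the example, not values drawn from Anthropic, AISI or Cyber.gov.au guidance.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical enforcement windows per severity band. Real windows are a
# policy decision for each organisation, set by risk appetite and regulation.
PATCH_WINDOWS = {
    "critical": timedelta(hours=48),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
    "low": timedelta(days=90),
}

@dataclass
class Vulnerability:
    cve_id: str
    cvss_score: float      # CVSS v3.x base score, 0.0-10.0
    internet_facing: bool  # whether the affected asset is internet-exposed

def severity(v: Vulnerability) -> str:
    """Map a CVSS base score to the standard CVSS v3 severity band."""
    if v.cvss_score >= 9.0:
        return "critical"
    if v.cvss_score >= 7.0:
        return "high"
    if v.cvss_score >= 4.0:
        return "medium"
    return "low"

def patch_deadline(v: Vulnerability) -> timedelta:
    """Return the enforcement window, halved for internet-facing assets
    (an illustrative rule, reflecting their greater exposure)."""
    window = PATCH_WINDOWS[severity(v)]
    return window / 2 if v.internet_facing else window

vuln = Vulnerability("CVE-2026-0001", 9.8, internet_facing=True)
print(severity(vuln), patch_deadline(vuln))  # → critical 1 day, 0:00:00
```

Encoding windows this way lets the SLA be enforced automatically in ticketing or CI pipelines rather than tracked by hand, which is the kind of automation the guidance above anticipates.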
Cyber.gov.au similarly warns boards that critical vulnerabilities often require a phased response lasting weeks or months, with surge resourcing and sustained executive attention.

Restrictions on the most capable models are only a temporary buffer. Mythos Preview itself is gated to selected partners, but Anthropic says wider safe deployment is the long-term goal and AISI expects more capable models to follow. Governance therefore cannot rest on the hope that the best offensive AI will remain bottled up.

Defenders must spend more simply to keep pace. Anthropic says vulnerability discovery will accelerate, patching must speed up, and incident-response pipelines will need more automation because most programs cannot “staff their way through” the resulting volume. OAIC guidance similarly stresses the need for up-to-date breach response plans, audit logging, access controls and patching, while the Australian Prudential Regulation Authority’s (APRA) CPS 234 already requires capability and control testing commensurate with the rate at which threats change.

Importantly, this is not a case for panic or for abandoning cyber basics. AISI’s message is the opposite: patching, robust access controls, sound security configuration and comprehensive logging remain central. The difference is that organisations should assume these basics will need to be executed faster, more consistently and at greater scale.

Where will AI hit cyber security first?

Vulnerability identification and exploit creation: This is the clearest shift. Anthropic says Mythos Preview can identify and exploit previously unknown flaws in major operating systems and browsers and can chain multiple bugs together; AISI found meaningful gains on expert cyber challenges and on a 32-step attack simulation, where Mythos Preview was the first model to solve the scenario end-to-end in some runs.
For boards, the takeaway is simple: serious exploit development is becoming faster, cheaper and less dependent on rare individual talent.

Patching and compensating controls: As exploit creation accelerates, patching becomes a race rather than routine maintenance. Where rapid patching is impossible, organisations will need compensating controls more often and more quickly – isolation, reduced internet exposure, tighter privileged access, enhanced monitoring, temporary service changes and other workarounds. Anthropic is already calling for shorter patch cycles, tighter enforcement windows, triage scaling and patching automation, while Cyber.gov.au tells boards to plan for sustained, board-visible responses to critical vulnerabilities.

Supply chain risk: The same model capability that finds bugs in your code can be pointed at shared components, common dependencies, vendor appliances and other third-party services. That means a single newly discovered flaw in widely used software can create simultaneous exposure across many organisations. APRA already requires regulated entities to assess the information-security capability of related and third parties, and Cyber.gov.au tells boards to ask directly whether their supply chain is affected during major vulnerability events. Mythos-class capability makes that supplier oversight materially more urgent.

AI for defence: The answer is not to avoid AI, because manual defence will struggle to keep up. Anthropic expressly recommends that defenders use currently available frontier models now for bug-finding, triage, patch drafting, misconfiguration analysis, pull-request review and even legacy migration.
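One concrete way teams operationalise the supply-chain question above – “is our estate exposed when a widely used component turns out to be flawed?” – is to cross-check a software bill of materials (SBOM) against advisory data. The sketch below is a minimal, hypothetical illustration: the SBOM fragment mimics the CycloneDX JSON shape, and the advisory set is a hard-coded stand-in for what would really be a vulnerability-database feed.

```python
import json

# Minimal CycloneDX-style SBOM fragment, inlined for illustration.
# A real SBOM would be generated by build tooling and be far larger.
SBOM_JSON = """
{
  "components": [
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.31.0"}
  ]
}
"""

# Hypothetical advisory feed: (package, affected version) pairs. In
# practice this would come from a vulnerability database, not a literal,
# and matching would use version ranges rather than exact equality.
ADVISORIES = {
    ("log4j-core", "2.14.1"),
    ("openssl", "1.0.2"),
}

def exposed_components(sbom_text: str, advisories: set) -> list:
    """Return SBOM components that exactly match a known advisory."""
    sbom = json.loads(sbom_text)
    return [
        c for c in sbom["components"]
        if (c["name"], c["version"]) in advisories
    ]

for hit in exposed_components(SBOM_JSON, ADVISORIES):
    print(f"Exposed: {hit['name']} {hit['version']}")
# → Exposed: log4j-core 2.14.1
```

The value of the pattern is speed: when a major vulnerability event lands, an organisation with current SBOMs can answer the board’s “are we affected?” question in minutes rather than days.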
The new risk is that defensive AI itself becomes part of the control environment – boards will need visibility over where sensitive code and logs are sent, who can approve model use, how outputs are validated and how dependent the organisation becomes on a small number of providers or restricted-access programs.

Custom code and legacy systems: Mythos-class models are particularly concerning for environments that rely on bespoke applications, poorly documented integrations and ageing platforms. Anthropic’s disclosures include vulnerabilities that survived 16 to 27 years of human review, and Australian guidance warns boards to understand exposure arising from legacy and shadow IT and to have a plan where patching is difficult or the original developer no longer supports the software. If publicly scrutinised code can contain decades-old flaws, bespoke internal applications should not be assumed safer merely because they are obscure.

Project Glasswing

In parallel with the release of Mythos Preview, and to address the issues flagged above, Anthropic has launched an initiative called Project Glasswing, which brings together key players in the cyber security ecosystem to use Mythos Preview for defensive and hardening activities. Project Glasswing launched with Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks.

This is an attempt to get ahead of the issue through controlled early access to Mythos Preview, and to address the otherwise natural asymmetries that exist in the ecosystem.

Bottom line

Mythos-class models do not mean every enterprise will be compromised tomorrow. They do mean that the economics of cyber offence have changed. A capability once limited by rare human expertise is moving toward automation, repetition and scale.
The immediate risks relate to the resources, systems and activity needed to remediate the new vulnerabilities the Glasswing cohort will soon start disclosing. In the short to medium term, boards will need to assess cyber risk governance and risk management capabilities more generally, including for non-Glasswing suppliers that support crown-jewel systems or mission-critical operational technology. IT and information security teams will need to review their own adoption of Mythos-class models as part of software and code development, and consider how to onboard and deploy AI defensive tooling more rapidly to improve detection and response capabilities. For boards, the prudent course is to reassess now, before broader capability diffusion forces that reassessment under incident conditions.