AI Danger Is No Longer a Myth – It’s Called Mythos
Lowy Institute
Details
- Date Published: 24 Apr 2026
Authors
- Tom Barber
- Sam Roggeveen
- Charles Lyons-Jones
- Larelle Bossi
Description
AI can now autonomously exploit software vulnerabilities. Australia’s regulatory frameworks are not keeping pace.
Summary
Recent advances in frontier models such as Anthropic's Mythos Preview demonstrate autonomous discovery and chaining of zero-day exploits, signalling a step change in cybersecurity risk. These capabilities were not explicitly trained but emerged on their own, underscoring the unpredictable trajectory toward AGI and the associated catastrophic threats, from biological weapon synthesis to escalation in nuclear command and control. The analysis argues that Australia's regulatory frameworks are lagging, and that Canberra should leverage the country's potential as an AI infrastructure hub to influence global safety governance and mitigate existential risks.
Body
On a warm Minsk afternoon in June 2010, a Belarusian computer specialist uncovered the world's first publicly known cyber weapon. Stuxnet, as it came to be known, was a worm that caused centrifuges inside Iran's Natanz nuclear enrichment facility to spin out of control by exploiting four zero-days – software vulnerabilities yet to be discovered or patched by developers. The cyber-attack was the culmination of a years-long, multibillion-dollar effort by the United States and Israel.

Fast forward to today, however, and the ability to identify and exploit zero-day vulnerabilities is no longer the preserve of highly specialised experts or hackers. Earlier this month, AI giant Anthropic revealed that its latest model, Mythos Preview, can do just that – often chaining multiple exploits together – completely autonomously. It found and exploited thousands of vulnerabilities across every operating system, some decades old and missed by literally millions of tests.

Less than two years ago the idea that AI could do this was theoretical; the proficiency of Mythos Preview has been described as an inflection point, a step change, and a crossing of the Rubicon in cyber security.

Most notably, the model was not explicitly trained to have these capabilities. Anthropic has said "they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy". This matters because the large language models that dominate the AI field are not written in the manner of conventional software. Engineers build the architecture and specify the training objectives, but what emerges is essentially grown, with capabilities that are not fully predictable even to those training it.

So while the Mythos Preview story has focused on cyber threats, there is a larger question at play. As an all-encompassing technology, the impact of AI will be felt across all aspects of life. And while there are clearly benefits to be had – potentially transformational ones – broader societal risks are also manifesting.

A case in point is medical research, where AI can help with drug discovery and disease diagnostics. It can also provide detailed instructions to malicious actors wanting to create a biological or chemical weapon. The building blocks for novel pathogens have long been available to order online, but the knowledge to put them together was rare and hard to acquire. With that knowledge now essentially democratised by AI, the risk of catastrophes such as engineered pandemics has increased.

Advances in AI similarly pose challenges to nuclear command, control and communications. Integrating AI into early warning, tracking and threat assessment can compress decision-making times, exacerbating the risk of miscalculation and unintended escalation. Decision-support tools can also subtly shift nuclear-use authority over time.

These are the challenges already posed by current AI models. What happens next could amount to a leap beyond the frontier. Experts consider the emergence of artificial general intelligence (AGI) – AI that can match human-level intelligence across domains – a realistic possibility before 2030. Once AGI can recursively improve itself, superintelligence – AI that surpasses humans in every field – would likely emerge soon after.

The timelines are speculative, with experts differing on when this exponential "intelligence explosion" will take place. But the uncertainty is overwhelmingly about when, not whether, AI will have a transformative impact.

Societies need to be prepared. Minister for Industry and Innovation Tim Ayres said in an address to the Lowy Institute in December that "The choices Australia makes in this strategic environment will have consequences for generations to come."

Australia is well placed to capitalise on the opportunities AI offers. Establishing home-grown frontier labs is not feasible given the size of the Australian economy and the head start current leaders enjoy. But AI infrastructure is a different story. Just as its stable democracy, independent legal system and existing relationships provide fertile ground for investment, so too does Australia possess the renewable energy potential to satiate a famously energy-intensive industry. Becoming a dependable AI infrastructure hub would give Australia a foothold in the AI value chain. That is a tool Canberra can bring to bear on global AI safety and governance – if it moves quickly enough.

But Australia needs simultaneously to do more on risk mitigation: the vast majority of Australian AI, public policy, cybersecurity, national security and legal experts recently surveyed consider current measures inadequate.

There is no shortage of credible proposals. Good Ancestors has mapped the regulatory gaps across five threat vectors, and Global Shield has proposed an accountability framework. The National AI Plan is a start on the legislative requirements, but only a start.

Australia has mature legal, ethical and assurance frameworks for the responsible deployment of AI, as well as a strong track record of civic participation and international engagement to shape global AI norms.

If self-improving AGI is the last invention humanity will need to make, AI might well be the last opportunity. Mythos Preview is a shot across the bow – the question is how Australia responds.