Mythos: Are Fears Over New AI Model Panic or PR? – Podcast

The Guardian


Details

Date Published
21 Apr 2026
Priority Score
4
Australian
No
Created
21 Apr 2026, 08:00 am

Authors (5)

Description

Ian Sample hears from Aisha Down, a reporter covering artificial intelligence for the Guardian

Summary

This discussion scrutinizes Anthropic’s decision to withhold 'Mythos Preview', a frontier model claimed to possess dangerous capabilities in software vulnerability exploitation. The analysis evaluates whether such claims of severe risk to national security and global economies are scientifically grounded or serve as a strategic marketing tool to encourage industry regulation. This debate is central to AI safety discourse as it highlights the tension between public transparency, corporate responsibility, and the management of catastrophic risks associated with advanced autonomous capabilities.

Body

Earlier this month the AI company Anthropic said it had created a model so powerful that, out of a sense of responsibility, it was not going to release it to the public. Anthropic says the model, Mythos Preview, excels at spotting and exploiting vulnerabilities in software, and could pose a severe risk to economies, public safety and national security. But is this the whole story? Some experts have expressed scepticism about the extent of the model’s capabilities. Ian Sample hears from Aisha Down, a reporter covering artificial intelligence for the Guardian, to find out what the decision to limit access to Mythos reveals about Anthropic’s strategy, and whether the model might finally spur more regulation of the industry.

Related: ‘Too powerful for the public’: inside Anthropic’s bid to win the AI publicity war

Support the Guardian: theguardian.com/sciencepod

Photograph: Samuel Boivin/NurPhoto/Shutterstock