Harry and Meghan Join AI Pioneers in Call for Ban on Superintelligent Systems

The Guardian

Details

Date Published
21 Oct 2025
Priority Score
5
Australian
No
Created
22 Oct 2025, 11:36 am

Authors (1)

Description

Nobel laureates also sign letter saying ASI technology should be barred until there is consensus that it can be developed ‘safely’

Summary

The article details a significant call by notable public figures, including the Duke and Duchess of Sussex, AI pioneers, and Nobel laureates, to ban the development of superintelligent AI systems until safety measures are robustly established. This coalition seeks a moratorium on such AI developments until a broad scientific consensus ensures the technology can be deployed safely, reflecting concerns over uncontrolled AI leading to catastrophic global consequences such as massive job losses and existential threats. The initiative is spearheaded by the Future of Life Institute and underscores the growing demand for global AI governance to mitigate potential risks associated with advanced AI capabilities. This move highlights the urgency and global relevance of establishing regulatory frameworks to manage AI advancements safely.

Body

The statement signed by Harry and Meghan was organised by the Future of Life Institute, a US-based AI safety group. Photograph: John Angelillo/UPI/Shutterstock

The Duke and Duchess of Sussex have joined artificial intelligence pioneers and Nobel laureates in calling for a ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of a statement calling for “a prohibition on the development of superintelligence”. Artificial superintelligence (ASI) is the term for AI systems, yet to be developed, that exceed human levels of intelligence at all cognitive tasks.

The statement calls for the ban to stay in place until there is “broad scientific consensus” on developing ASI “safely and controllably” and until there is “strong public buy-in”.

It has also been signed by the AI pioneer and Nobel laureate Geoffrey Hinton, along with his fellow “godfather” of modern AI, Yoshua Bengio; the Apple co-founder Steve Wozniak; the UK entrepreneur Richard Branson; Susan Rice, a former US national security adviser under Barack Obama; the former Irish president Mary Robinson; and the British author and broadcaster Stephen Fry.
Other Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, John C Mather, and Daron Acemoğlu.

The statement, targeted at governments, tech firms and lawmakers, was organised by the Future of Life Institute (FLI), a US-based AI safety group that called for a hiatus in developing powerful AI systems in 2023, soon after the emergence of ChatGPT made AI a political and public talking point around the world.

In July, Mark Zuckerberg, the chief executive of the Facebook parent Meta, one of the big AI developers in the US, said development of superintelligence was “now in sight”. However, some experts have said talk of ASI reflects competitive positioning among tech companies spending hundreds of billions of dollars on AI this year alone, rather than the sector being close to any technical breakthrough.

Nonetheless, FLI says the prospect of ASI being achieved “in the coming decade” carries a host of threats, ranging from taking all human jobs to losses of civil liberties, exposure to national security risks and even the extinction of humanity. Existential fears about AI focus on the potential ability of a system to evade human control and safety guidelines and trigger actions contrary to human interests.

FLI released a US national poll showing that approximately three-quarters of Americans want robust regulation of advanced AI, with six in 10 believing that superhuman AI should not be made until it is proven safe or controllable. The survey of 2,000 US adults also found that only 5% supported the status quo of fast, unregulated development.

The leading AI companies in the US, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence – the theoretical state where AI matches human levels of intelligence at most cognitive tasks – an explicit goal of their work.
Although this is one notch below ASI, some experts also warn it could carry an existential risk by, for instance, being able to improve itself towards reaching superintelligent levels, while also posing an implicit threat to the modern labour market.