AI Heavyweights Call for End to 'Superintelligence' Research

YourLifeChoices


Details

Date Published
22 Oct 2025

Authors (1)

Mary-Anne Williams, UNSW Sydney

Description

Flavio Coelho / Getty Images

Summary

The article reports on a public statement issued by the Future of Life Institute, calling for a global prohibition on superintelligence research until it can be proven safe and controllable. Signed by notable figures across various fields, the statement emphasizes the existential risks posed by superintelligent AI systems that could operate beyond human control. The article highlights the broad coalition of AI pioneers, business leaders, and political figures supporting this call, reflecting a growing concern over the unchecked development of AI. It underscores the argument that while AI holds potential for incredible technological advancements, such pursuits should not come at the cost of human control and safety.

Body

Mary-Anne Williams, UNSW Sydney

I have worked in AI for more than three decades, including with pioneers such as John McCarthy, who coined the term "artificial intelligence" in 1955.

In the past few years, scientific breakthroughs have produced AI tools that promise unprecedented advances in medicine, science, business and education. At the same time, leading AI companies have the stated goal of creating superintelligence: not merely smarter tools, but AI systems that significantly outperform all humans on essentially all cognitive tasks.

Superintelligence isn't just hype. It's a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world's best researchers. What was once science fiction has become a concrete engineering goal for the coming decade.

In response, I and hundreds of other scientists, global leaders and public figures have put our names to a public statement calling for superintelligence research to stop.

What the statement says

The new statement, released today by the AI safety nonprofit Future of Life Institute, is not a call for a temporary pause, as we saw in 2023. It is a short, unequivocal call for a global ban:

We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.

The list of signatories represents a remarkably broad coalition, bridging divides that few other issues can. The "godfathers" of modern AI are present, such as Yoshua Bengio and Geoff Hinton. So are leading safety researchers such as UC Berkeley's Stuart Russell.

But the concern has broken free of academic circles. The list includes tech and business leaders such as Apple cofounder Steve Wozniak and Virgin's Richard Branson. It includes high-level political and military figures from both sides of US politics, such as former National Security Advisor Susan Rice and former chairman of the Joint Chiefs of Staff Mike Mullen. It also includes prominent media figures such as Glenn Beck and former Trump strategist Steve Bannon, together with artists such as will.i.am and respected historians such as Yuval Noah Harari.

Why superintelligence poses a unique challenge

Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains and air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology.

Superintelligence could extend this trajectory, but with a crucial difference: people will no longer be in control.

The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs. Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that's producing greenhouse gases. Instruct it to maximise human happiness, and it might find a way to trap every human brain in a perpetual dopamine loop. Or, in Swedish philosopher Nick Bostrom's famous example, a superintelligence tasked with producing as many paperclips as possible might try to convert all of Earth's matter, including us, into raw material for its factories.

The issue is not malice but mismatch: a system that understands its instructions too literally, with the power to act cleverly and swiftly.

History shows what can go wrong when our systems grow beyond our capacity to predict, contain or control them. The 2008 financial crisis began with financial instruments so intricate that even their creators could not foresee how they would interact until the entire system collapsed. Cane toads introduced in Australia to fight pests have instead devastated native species. The COVID pandemic exposed how global travel networks can turn local outbreaks into worldwide crises.

Now we stand on the verge of creating something far more complex: a mind that can rewrite its own code, redesign and achieve its goals, and out-think every human combined.

A history of inadequate governance

For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs. These are important issues, but they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence.

The new statement on superintelligence aims to start a global conversation not just about specific AI tools, but about the very destination AI developers are steering us toward.

The goal of AI should be to create powerful tools that serve humanity. This does not mean autonomous superintelligent agents that can operate beyond human control without aligning with human well-being. We can have a future of AI-powered medical breakthroughs, scientific discovery and personalised education. None of these require us to build an uncontrollable superintelligence that could unilaterally decide the fate of humanity.

Mary-Anne Williams, Michael J Crouch Chair in Innovation, School of Management and Governance, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.