Experts Demand a Time-Out in the AI Race
Information Age
Details
- Date Published
- 29 Mar 2023
- Priority Score
- 5
- Australian
- Yes
- Created
- 28 Oct 2025, 02:51 pm
Description
Experts warn 'out-of-control' AI development threatens civilisation.
Summary
The article details a call from major technology figures, including Elon Musk and Steve Wozniak, advocating for a pause in AI development due to potential existential threats. A Future of Life Institute open letter, signed by over 1300 experts, insists on a six-month moratorium on developing AI models more powerful than GPT-4 to address risks like job automation, propagating misinformation, and the emergence of nonhuman minds. This pause would allow for the establishment of safety protocols. This initiative aligns with global efforts to manage AI’s rapid advancement while reflecting Australia's involvement through academic signatories. The discussion contributes significantly to AI safety discourse, highlighting critical governance needs for catastrophic risk mitigation.
Body
By David Braue on Mar 30 2023 12:10 PM

[Image: Some of the biggest names in tech are worried about AI destroying civilisation. Image: Shutterstock]

Elon Musk, Steve Wozniak, and over 1,300 academics, tech and business luminaries have signed a Future of Life Institute (FLI) open letter calling for a six-month freeze on "out-of-control" AI development that, they say, poses "profound risks to society and humanity".

That development has accelerated at a furious rate since last November's release of ChatGPT – the natural-language generative AI tool that is already being used to answer interview questions, develop malware, write application code, revolutionise web browsing, create prize-winning art, bolster productivity suites from Microsoft and Google, and more.

A global race to embrace and improve the technology – and its new successor, the 'multimodal' GPT-4, capable of analysing images using techniques that emulate significantly improved deductive reasoning – has fuelled unchecked investment in the technology so quickly, the FLI letter warns, that adoption of "human-competitive" AI is now advancing without consideration of its long-term implications.

Those implications, according to the letter, include the potential to "flood our information channels with propaganda and untruth"; automation of "all the jobs"; "loss of control of our civilisation"; and development of "nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us".

To stave off such AI-driven annihilation, the letter calls for a "public and verifiable" six-month hiatus on development of AI models more powerful than GPT-4 – or, in the absence of a rapid pause, a government-enforced moratorium on AI development.

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development [to] ensure that systems adhering to them are safe beyond a reasonable doubt," the letter argues.

The letter is not calling for a complete pause on AI development, FLI notes, but a "stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities".

"AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."

Tech giants all but absent

The letter comes less than a year after Google AI researcher Blake Lemoine was put on administrative leave for claiming Google's own LaMDA AI engine had become so advanced that it was sentient – a claim that Google's ethicists and technologists flat-out rejected.

Lemoine is not listed among the signatories to the FLI open letter, but many who are share responsibility for AI development's breakneck pace, with Musk – one of the original co-founders of GPT-3 creator OpenAI – recently reported to have pitched AI researchers about developing an alternative non-"woke" platform with fewer restrictions on the creation of offensive content.

The list of signatories – which has been paused to allow vetting processes to catch up amidst high demand – includes executives at content-based companies such as Pinterest and Getty Images, as well as AI and robotics thinktanks including the Center for Humane Technology, Cambridge Centre for the Study of Existential Risk, Edmond and Lily Safra Center for Ethics, UC Berkeley Center for Human-Compatible AI, Unanimous AI, and more.

Australian signatories include Western Sydney University professor of mathematics Andrew Francis; Melbourne University professors Andrew Robinson and David Balding and neuroscience research fellow Colin G Hales; UNSW scientia professor Robert Brooks; University of Queensland honorary professor Joachim Diederich; University of Sydney law professor Kimberlee Weatherall; and others.

Tech giants such as Meta, which recently closed its Responsible Innovation team after one year, are all but absent from the list – which features no Apple, Twitter, or Instagram employees, only one employee of Meta, three Google researchers and software engineers, and three employees of Google AI subsidiary DeepMind.

The letter isn't the first time FLI has warned about the risks of AI, with previous open letters warning about lethal autonomous weapons, the importance of guiding AI Principles, and the need to prioritise research on "robust and beneficial" AI.

David Braue is an award-winning technology journalist who has covered Australia's technology industry since 1995. A lifelong technophile, he has written and edited content for a broad range of audiences across myriad consumer and business topics, with a particular focus on managing the intersection of technological innovation and business transformation. He has twice won Best IT Journalist at the Australian IT Journalism awards, and was named Best Technology Journalist at the 2024 Australian Technologies Competition.

Copyright © Information Age, ACS

Tags: artificial intelligence, AI, Elon Musk, Future of Life Institute