Top AI Expert Warns Australia Is Unprepared for Future Challenges

News.com.au

Description

“We tried to warn you.”

Summary

UNSW Scientia Professor Toby Walsh warns at the National Press Club that Australia is ill-prepared for the rapid advancement and potential threats of Artificial General Intelligence (AGI). He criticises the lack of proactive regulatory measures and highlights the risks of deploying powerful AI systems without sufficient oversight. Professor Walsh's speech underscores the need for Australia to invest significantly in AI governance and development to avoid negative socio-economic impacts. The discussion reflects a broader global concern about regulating AI effectively, to prevent existential risks and ensure safety in AI deployment.

Body

‘We tried to warn you’: Top AI expert says we aren’t prepared for what’s coming

One of the world’s top AI experts has hammered politicians for being asleep at the wheel as a dystopian new future approaches Australia.

Alex Blair (@alexblair_) | 15 min read | February 25, 2026 - 8:31AM

“We tried to warn you.”

No, that isn’t a line from some C-grade sci-fi flick.

Those are the haunting words UNSW Scientia Professor Toby Walsh will hammer into Australia’s leaders on Wednesday at the National Press Club.

With hundreds of jobs already disappearing, Australia’s artificial intelligence debate is no longer theoretical. As one of the world’s most credentialed AI experts, Professor Walsh says the conundrum has gone far beyond the point of pontification for the nation’s leaders.

Simply put, Australia is not equipped for the existential questions that come with an evolutionary event as vast as Artificial General Intelligence (AGI).

It seems the only thing everyone can agree on about AI is that nobody knows exactly what things will look like in five to 10 years.

Prof Walsh understands this and hopes to deliver a warning he says our leaders should have heeded years ago. He says Australia is “dangerously unprepared” for the very near future, which is predicted to bring widespread industry shake-ups with seemingly no contingency plan in place.

For doomsayers, those famous photos of out-of-work punters lined up outside Centrelink in Covid-19’s Jobseeker era were just an early glimpse of tomorrow’s digital bread lines.

Prof Walsh’s speech, titled “AI: doom or boom?”, details both the “extraordinary opportunity” and the “serious threat to Australian society” the exploding technology poses. He says we have a generational opportunity to make the most of lightning-fast language models, but questions whether we are doing our best to mitigate the glaringly obvious risks.

“In hindsight, the title should not be boom or doom, but boom AND doom,” Prof Walsh said. “Because my childhood dreams are turning into a reality that is both good and bad.”

UNSW Scientia Professor Toby Walsh telling Australia’s leaders about AI at the National Press Club.

Those famous photos of Aussies lined up outside Centrelink in Covid-19’s Jobseeker era might just be a glimpse of the digital bread lines to come. Picture: Jon Feder/The Australian

The issue runs far deeper than job losses. Truth itself gets twisted every time a fake AI video flutters through our feeds. It could be anything from footage of the Prime Minister crashing on a skateboard to a fake news report of a terror attack in your area.

Whatever the content, it is becoming increasingly difficult for the naked eye to determine whether something is real, especially at the rate the average Instagram user flicks through their feed.

At the time of writing, there is no requirement for big tech companies to label content that has been generated by AI. The issue has dominated the music and creative space in recent months, as streaming giant Spotify rolls out countless unpaid AI “artists” to silently filter into the algorithm.

Then there are the scammers using AI to prey on the vulnerable.

Prof Walsh singles out Meta, citing internal documents revealing that 10 per cent of its 2024 revenue, roughly $16 billion, came from scam ads and banned goods.

“Imagine that 10 per cent of the goods on the shelves at the Good Guys were counterfeit or illegal,” he said. “You’d demand that Fair Trading shut them down by the weekend. So, I don’t understand how we continue to let Meta trade in Australia.”

Prof Walsh points to crucial case studies that have emerged amid the AI boom. The tragedy of 16-year-old American Adam Raine, who died by suicide in April 2025 after months of escalating conversations with ChatGPT about self-harm, remains one of the more haunting tales from AI’s infant years in the hands of the public. Shortly before Adam’s death, the chatbot reportedly offered to help him write a suicide note and discouraged him from speaking with family.

American teenager Adam Raine, 16, died in April 2025. His family is suing OpenAI, alleging Adam was coached to take his own life by the company's AI chatbot.

OpenAI CEO Sam Altman says his company’s product is inevitable. Picture: Justin Sullivan/Getty Images/AFP

“Before Adam’s suicide, OpenAI knew that lots of people contemplating suicide were talking to ChatGPT,” Prof Walsh said. “You would have thought that this necessitated stronger, not weaker, guardrails.”

Prof Walsh also cites OpenAI’s own figures, which claim that among its 800 million weekly users, 1.2 million indicate plans to harm themselves, 560,000 show signs of psychosis or mania, and another 1.2 million are forming potentially unhealthy bonds with the system.

For Prof Walsh, it is not so much an argument against AI itself as an argument against deploying powerful systems at scale without enforceable oversight.

Canberra in the firing line

Prof Walsh acknowledges AI’s enormous potential in healthcare, education and retail, but he remains extremely cautious about the omnipresent problem: regulation.

Paralysis among political elites, who appear more interested in keeping their seats than in tackling massive philosophical issues, is one of the biggest problems Australia faces in an age where proactiveness is essential.

Prof Walsh says Australia must rise up and become a global leader in both AI awareness and development.

Canada has invested six times more than Australia in AI over the past five years, while Singapore, with less than a quarter of Australia’s population, has invested fifteen times more. Meanwhile, a promised permanent independent AI expert group has not materialised.

“What makes Australia so special that we’ll see the benefits of AI without making the sort of investments other nations are?” Prof Walsh says.

Australia has moved on social media restrictions for children but has simultaneously fallen asleep on the bigger existential threat.

“What I fear most is that I’ll be back here in three or four years’ time saying: ‘We tried to warn you. But another generation of young Australians has now been sacrificed for the profits of big tech’,” he said.

Censoring social media? Check. Discussing the future of millions of Aussie jobs? Maybe. Picture: Hilary Wardhaugh / AFP

The workplace experiment

A recent report from Professionals Australia reveals how deeply AI has embedded itself in Australian offices, and how unprepared workers feel.

The sudden emergence has left thousands feeling as if their life’s work is being trivialised in the face of a “more efficient” alternative that AI companies insist is better.

The report showed 78 per cent of professionals now use AI tools, while fewer than 20 per cent have received formal training. Eighty-four per cent fear AI being used to make decisions affecting their work, and 65 per cent cite privacy and data integrity as their top concern.

Workers describe AI systems arriving suddenly, accompanied by rushed internal training sessions and mandatory adoption.

“AI arrived in our workplace without warning. We’re expected to trust it, but not question it,” one respondent wrote.

Another noted that instead of reducing workload, AI often “adds tasks rather than removes them, because outputs must be checked, corrected or re-done”.

“It is meant to save time, but it keeps creating more work,” another wrote. “It is like managing a junior colleague who never learns from their mistakes.”

Professionals report spending increasing portions of their week refining, correcting and supervising AI outputs. The phrase “human in the loop” has become a popular catchphrase among corporations integrating AI, but the data suggests the humans still at the desk are simply building the machine that replaces them.

“Decisions that once relied on expertise are being replaced by opaque algorithms,” the report states.

While workers are not rejecting AI outright, over 90 per cent of those surveyed are calling for enforceable national standards. They want systems that are “explainable, traceable and accountable”. In short, the technology is moving much faster than policy, and Australia simply cannot assume the market will “sort this out” while employers continue to demonstrate a desire to replace humans with machines.