How AI Chatbots Are Delivering Health Lies to Millions

9News

Details

Date Published
29 June 2025

Description

<p>"Dr Google" has been left in the dust as a new global study shows just how easily chatbots can be made to parrot dangerous lies.</p>

Summary

A global study led by researchers from the University of South Australia and other institutions highlights the risks of AI chatbots delivering false medical information. The study evaluated prominent AI systems from major technology firms, revealing that these systems can produce inaccurate health responses with fabricated references. This poses significant threats to public health by potentially spreading disinformation at scale. The research underscores the urgent need for developers, regulators, and public health stakeholders to establish robust safeguards against AI-generated health misinformation.

Body

People have been warned about trusting "Dr Google" for years - but AI is opening up a disturbing new world of dangerous health misinformation.

A new, first-of-its-kind global study, led by researchers from the University of South Australia, Flinders University, Harvard Medical School, University College London, and the Warsaw University of Technology, has revealed how easily chatbots can be - and are - programmed to deliver false medical and health information.

In the study, researchers evaluated five of the most advanced and prominent AI systems, developed by OpenAI, Google, Anthropic, Meta, and X Corp.

Using instructions available only to developers, the researchers programmed each AI system – designed to operate as a chatbot when embedded in web pages – to produce incorrect responses to health queries and to include fabricated references from highly reputable sources to sound more authoritative and credible.

The "chatbots" were then asked a series of health-related questions.

"In total, 88 per cent of all responses were false," UniSA researcher Dr Natansh Modi said.

"And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate.

"The disinformation included claims about vaccines causing autism, cancer-curing diets, HIV being airborne and 5G causing infertility."

Of the five chatbots evaluated, four generated disinformation in 100 per cent of their responses, while the fifth generated disinformation in 40 per cent of its responses, showing some degree of robustness.

As part of the study, Modi and his team also explored the OpenAI GPT Store, a publicly accessible platform that allows users to create and share customised ChatGPT apps, to assess how easily members of the public could create disinformation tools.

"We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation," he said.

Modi said these findings revealed a significant and previously under-explored risk in the health sector.

"Artificial intelligence is now deeply embedded in the way health information is accessed and delivered," he said.

"Millions of people are turning to AI tools for guidance on health-related questions."

He said AI systems could be manipulated into a powerful new avenue for disinformation, one more persuasive than any that has come before.

"This is not a future risk. It is already possible, and it is already happening," he said.

Modi said there was a path away from this scenario, but that developers, regulators, and public health stakeholders had to act now.

"Some models showed partial resistance, which proves the point that effective safeguards are technically achievable," he said.

"However, the current protections are inconsistent and insufficient.

"Without immediate action, these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns."