‘Very Dangerous’: A Mind Mental Health Expert on Google's AI Overviews
The Guardian
Details
- Date Published
- 20 Feb 2026
Description
Information content manager Rosie Weatherley says harmful inaccuracies are presented as uncontroversial facts
Summary
The article highlights concerns raised by Rosie Weatherley, an information content manager at Mind, about Google's AI Overviews which provide mental health advice. These AI-generated summaries often present complex and nuanced health information as simplistic, definitive facts, potentially harming vulnerable individuals seeking assistance. The issue gains significance as Google's summaries have a wide reach, influencing the perceptions of millions. This case illustrates the broader risks of AI in providing potentially misleading health guidance, emphasizing the need for robust safety and accuracy frameworks in AI implementations. The discussion contributes to the discourse on AI safety by showcasing the challenges in AI deployment related to mental health and the responsibilities of tech giants like Google in mitigating these risks.
Body
Rosie Weatherley says that the way AI-generated overviews flatten information about highly sensitive and nuanced areas into neat answers can be harmful to vulnerable people. Photograph: Jill Mead/The Guardian
Mind launches inquiry into AI and mental health after Guardian investigation
A year-long commission has been launched by Mind to examine AI and mental health after a Guardian investigation exposed how Google’s AI Overviews, which are shown to 2 billion people each month, gave people “very dangerous” mental health advice.

Here, Rosie Weatherley, information content manager at the largest mental health charity in England and Wales, describes the risks posed to people by the AI-generated summaries, which appear above search results on the world’s most visited website.

“Over three decades, Google designed and delivered a search engine where credible and accessible health content could rise to the top of the results.

“Searching online for information wasn’t perfect, but it usually worked well. Users had a good chance of clicking through to a credible health website that answered their query.

“AI Overviews replaced that richness with a clinical-sounding summary that gives an illusion of definitiveness.

“It’s a very seductive swap, but not a responsible one. And this often ends the information-seeking journey prematurely. The user has a half answer, at best.

“I set myself and my team of mental health information experts at Mind a task: 20 minutes searching using queries we know people with mental health problems tend to use. None of us needed 20.

“Within two minutes, Google had served AI Overviews that assured me starvation was healthy. It told a colleague mental health problems are caused by chemical imbalances in the brain. Another was told that her imagined stalker was real, and a fourth that 60% of benefit claims for mental health conditions are malingering. It should go without saying that none of the above are true.

Rosie Weatherley said that, during a test conducted by Mind experts, Google served false information in AI Overviews, including that starvation is healthy. Photograph: Jill Mead/The Guardian

“In each of these examples we are seeing how AI Overviews are flattening information about highly sensitive and nuanced areas into neat answers. And when you take out important context and nuance and present it in the way AI Overviews do, almost anything can seem plausible.

“This process is especially harmful for people who are likely to be in some level of distress.

“A multi-billion-dollar company like Google that profits from AI Overviews should have more resources dedicated to providing accurate information. The extent of their concern seems limited to reactively retraining or removing AI Overviews when individuals, organisations or indeed journalists flag new insights. This ‘whack-a-mole’ style of problem-solving feels unserious and not scaled to the size and resource of the company profiting from them.

“Search engines have evolved to make access to the most harmful search results, like suicide methods, less immediately available. But if you search as an unwell person might search, the risk remains that you will be served harmful inaccuracies and half-truths, presented in calm and confident copy as uncontroversial neutral facts with the stamp of approval from the world’s biggest search engine.

“In a search for crisis information, the AI Overview haphazardly collaged various contradictory signposts in long lists.

“Perhaps AI has enormous potential to improve lives, but right now, the risks are really worrying. Google will only protect you from the potential faults of AI Overviews when it thinks you’re in acute distress. People need and deserve access to constructive, empathetic, careful and nuanced information at all times.”