Google Argues That AI Fabrications Are an 'Inherent Feature'


Details

Date Published
28 May 2024
Priority Score
3
Australian
Yes
Created
8 Mar 2025, 12:37 pm

Authors (1)
Parker McKenzie

Description

Users of Google's AI Overviews have received advice ranging from jumping off the Golden Gate Bridge to eating rocks.

Summary

The article examines AI hallucinations in Google's AI-driven search functions and their broader implications for AI safety and reliability. Google's framing of these inaccuracies as 'hallucinations', and even as an inherent feature, underscores the persistent difficulty of achieving factual correctness with generative AI systems. These missteps, which include dangerous suggestions and false information, raise critical questions about the effectiveness and trustworthiness of AI technologies promoted as reliable sources of information. The widespread impact and potential harm of such errors are pertinent to global AI governance frameworks aimed at reducing catastrophic risks associated with AI technologies.

Body

Google and OpenAI argue that AI fabrications are an 'inherent feature'

Parker McKenzie, May 28, 2024

Google's AI search function has been dishing out wildly incorrect information. Photo: Getty

When Google launched its AI-driven search function on May 14, it promised users the research, planning and brainstorming tool of the future.

"We've meticulously honed our core information quality systems to help you find the best of what's on the web," Liz Reid, head of Google search, said.

"We've built a knowledge base of billions of facts about people, places and things, all so you can get information you can trust in the blink of an eye."

Instead, Google's AI search function has been dishing out wildly incorrect information.

Users of AI Overview have received advice ranging from a depression-treating jump off the Golden Gate Bridge to eating rocks for their nutritional value.

Google's new AI Overview has had a rough launch. Photo: Google

Artificial intelligence 'hallucinations' aren't a new phenomenon: since ChatGPT first popularised generative AI and large language models (LLMs), 'facts' have been spawned out of thin air.

Google CEO Sundar Pichai said that "there is a lot of nuance" to generative AI giving clearly incorrect answers.

"You're getting at a deeper point where hallucination is still an unsolved problem. In some ways, it's an inherent feature," Pichai said in an interview with The Verge.

"LLMs aren't necessarily the best approach to always get at factuality."

Fixing the problem?

Toby Walsh, a professor of AI at UNSW Sydney, explained that these false answers occur because generative AI doesn't know what is true, just what is popular.

"For example, there aren't a lot of articles on the web about eating rocks as it is so self-evidently a bad idea," he said in The Conversation.

"There is, however, a well-read satirical article from The Onion about eating rocks, and so Google's AI based its summary on what was popular, not what was true."

Pizza enthusiasts, or anyone who likes food, were shocked by this suggestion. Photo: Google

Google's own promotional material for its Bard chatbot (now Gemini) made false claims about the James Webb Space Telescope, while a study found that when ChatGPT generated 178 scientific references for a research article, 69 could not be substantiated.

Sam Altman, CEO of OpenAI, made a similar argument to Pichai's during an interview in September, describing AI hallucinations as just as much a feature as a bug.

"One of the sort of non-obvious things is that a lot of value from these systems is heavily related to the fact that they do hallucinate," he said.

"If you want to look something up in a database, we already have good stuff for that."

Personal impact

Although Google and OpenAI argue that the creativity of their AI models' answers is a feature, real people are being affected by the results.

Brian Hood, the Mayor of Hepburn Shire outside Melbourne, threatened to sue ChatGPT's creator, OpenAI, after he was falsely named as taking part in a bribery scandal involving the Reserve Bank of Australia, but later abandoned the lawsuit.

This pregnancy advice was not well received. Photo: Google

Other cases have landed people in hot water, including lawyers using ChatGPT to cite case law and US-based professor Jonathan Turley being falsely accused of sexually harassing one of his students.

"The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska," Turley said in USA Today.

"It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone."

Google's promise of trustworthy information is starting to look as accurate as the AI Overview search results.

Topics: Artificial Intelligence, Google