AI Hallucinations Cause Artificial Intelligence to Falsely Describe Individuals as Criminals

ABC News


Details

Date Published
3 Nov 2024

Description

Unprecedented legal battles are testing whether the parent companies of tools like ChatGPT can be held liable for defamation when innocent people are incorrectly described as criminals.

Summary

The article explores the legal implications of 'AI hallucinations', in which artificial intelligence erroneously portrays individuals as criminals. Highlighting cases from Germany and Australia, it examines the challenges faced by individuals such as Martin Bernklau and Brian Hood in seeking redress against companies like Microsoft and OpenAI. These incidents underscore the potential for AI-generated misinformation to cause significant reputational damage, as well as the intricate legal challenges involved, particularly in defamation cases. This discourse is crucial to formulating effective governance frameworks and legal regulations that mitigate the risks associated with AI errors globally, including in Australia.

Body

AI hallucinations caused artificial intelligence to falsely describe these people as criminals

By Anna Kelsey-Sugg and Damien Carrick for Law Report, ABC Radio National
Topic: Artificial Intelligence
Sun 3 Nov 2024 at 8:00pm

AI tools' mistakes are nothing new, but defamation cases in response to them certainly are. (Unsplash: Lucian Novosel)

German journalist Martin Bernklau made a shocking discovery earlier this year when he typed his name into Microsoft's AI tool, Copilot.

"I read there that I was a 54-year-old child molester," he tells ABC Radio National's Law Report.

The AI information said Bernklau had confessed to the crime and was remorseful.

But that's not all.

Microsoft's AI tool also described him as an escapee from a psychiatric institution, a con-man who preyed on widowers, a drug dealer and a violent criminal.

"They were all court cases I wrote about," Bernklau says.

The tool had conflated Bernklau's news reporting with his personal experience, presenting him as the perpetrator of the crimes he'd reported on.

It also published his real address and phone number, and a route planner to reach his home from any location.

When AI tools produce false results, it's known as an "AI hallucination".

Bernklau isn't the first to experience one. But his story is at the forefront of how the law and AI intersect.

And right now, it's all pretty messy.

To take Copilot to court or not

When Bernklau found the hallucinations about him, he wrote to the prosecutor in Tübingen, the German city where he's based, as well as the region's data protection officer.

For weeks, neither responded, so he decided to go public with his case.

TV news stations and the local newspaper ran the story, and Bernklau hired a lawyer who wrote a cease-and-desist demand.

"But there was no reaction by Microsoft," he says.

He's now unsure of what to do next.

His lawyer has advised that if he takes legal action, it could take years for the case to get to court and the process would be very expensive, with potentially no positive result for him.

In the meantime, he says his name is now completely blocked and unsearchable on Copilot, as well as on other AI tools such as ChatGPT.

Bernklau believes the platforms have taken that action because they're not able to extract the false information from the AI model.

AI sued for defamation

In Australia, another AI hallucination affected the mayor of regional Victoria's Hepburn Shire Council, Brian Hood, who was wrongly described by ChatGPT as a convicted criminal.

Councillor Hood is in fact a highly respected whistleblower who discovered criminal wrongdoing at a subsidiary of the Reserve Bank of Australia.

He launched legal action against OpenAI, the maker of ChatGPT, but later dropped the case because of the enormous cost involved.

If he'd gone through with suing OpenAI for defamation, Councillor Hood may have been the first person in the world to do so.

'Not an issue that can be easily corrected'

In the US, a similar action is currently proceeding. It involves a US radio host, Mark Walters, who ChatGPT incorrectly claimed was being sued by a former workplace for embezzlement and fraud.

Walters is now suing OpenAI in response.

"He was not involved in the case … in any way," says Simon Thorne, a senior lecturer in computer science at Cardiff School of Technologies, who has been following the case.

Mr Walters' legal case is now up and running, and Dr Thorne is very interested to see how it plays out, and what liability OpenAI is found to have.

"It could be a landmark case, because one imagines that there are many, many examples of this," he says.

"I think they're just waiting to be discovered."

But when they are, there may not be a satisfying resolution.

"[Hallucinations are] not an issue that can be easily corrected," Dr Thorne says.

"It's essentially baked into how the whole system works.

"There's this opaqueness to it ... We can't work out exactly how that conclusion was reached by ChatGPT. All we can do is notice the outcome."

Could AI be used in court?

AI doesn't only feature in complaints. It's also used by lawyers.

AI is increasingly used to generate legal documents like witness or character statements.

Victorian lawyer Catherine Terry is heading a Victorian Law Reform Commission inquiry into the use of AI in Victoria's courts and tribunals.

But Ms Terry says there's a risk of undermining "the voice of the person", an important element of court evidence.

"It may not always be clear to courts that AI has been used … and that's another thing courts will need to grapple with as they start seeing AI being used in statements before the court," she says.

Queensland and Victorian state courts have issued guidelines requiring that they be informed if lawyers are relying on AI in any information they present in a case.

But in future, courts could be using AI, too.

"AI could be used for efficiency in case management [or] translation," Ms Terry says.

"In India, for example, the Supreme Court translates the hearings into nine different local languages."

AI could also be used in alternative dispute resolution online.

It's further fuel for clear legal regulation around AI, both for those using it accurately and for those affected by its mistakes.

"AI is really complicated and multi-layered, and even experts can struggle to understand and explain how it's used," Ms Terry says.

Ms Terry welcomes submissions to the Law Reform Commission's AI inquiry, to help increase clarity and safety around AI in the legal system.