Why Is ChatGPT Trying to Gaslight Me?
The Sydney Morning Herald
Details
- Date Published: 22 Mar 2024
- Priority Score: 3
- Australian: Yes
- Created: 8 Mar 2025, 02:41 pm
Description
AI chatbots may be sophisticated and slick, but they’re far from being a source of reliable information.
Summary
The article explores the limitations of AI chatbots, specifically pertaining to their ability to provide reliable information. The author shares an experience where a chatbot produced factually incorrect information and then appeared to deny its inaccuracies, an occurrence that underscores the challenges associated with the language models' inherent design. Lingqiao Liu from the University of Adelaide provides expert commentary, noting that large language models function based on probabilistic data patterns rather than factual veracity, leading to errors known as 'hallucinations'. The article suggests that while AI can mimic human-like interaction, it is not capable of true understanding, highlighting the ongoing efforts among developers to enhance the accuracy of AI outputs through better data practices and verification protocols.
Body
March 22, 2024 — 5.01am

We have been told by management that we should make use of AI at work to “improve efficiency”. I was initially reluctant, but a friend told me I’d be surprised by how useful chatbots could be with certain tasks, so I decided to give it a go.

To begin, I was impressed. Then I asked a question related to something quite specific to my job and area of expertise. The answer included several patent inaccuracies. I pointed this out inside the chat, and then a bizarre conversation ensued. The AI denied that it was wrong and when I politely explained its error, it attempted to gaslight me. Did I just get lied to by an AI language model?

I think what you experienced is pretty common. Not just the positive initial impression followed by the realisation that the dazzling fluency and responsiveness sometimes belie major weaknesses, but also being confronted with what seems like deception.

I asked Dr Lingqiao Liu, an Associate Professor in the University of Adelaide’s School of Computer Science and an Academic Member of the Australian Institute for Machine Learning, about how these mistakes occur.

“Large language models [LLMs] are powerful tools that have demonstrated remarkable abilities in generating human-like text. However, like all AI technologies, they have limitations. One challenge with LLMs is ensuring factual accuracy,” he says.

“By design, these models are not repositories of truth but rather pattern-recognising systems that generate responses based on probabilities derived from vast datasets.
While they can mimic the style and structure of factual discourse, the content generated is inherently probabilistic, not guaranteed to be true. In the research community, factually incorrect or nonsensical responses from an LLM are often called ‘hallucinations’.”

Liu says that developers and researchers are working on methods to improve the veracity of information provided by LLMs. “This includes refining training datasets, implementing fact-checking mechanisms, and developing protocols that enable models to source from and cite up-to-date and reliable information.”

In my own experience, I’ve found some of the assistants underpinned by these LLMs to be quite useful in answering questions that might take several – or even dozens of – traditional browser searches. But, like you, I’ve noticed inaccuracies, often followed by weird evasions.

I once asked a chatbot about the derivation of a corporate buzzword and the response included the phrase “My former colleague, Lucy Kellaway, said that …” This seemed really odd.

After a quick dig around, it became clear that the AI assistant had taken the phrase from a Guardian article and used it in its response to me – not as a quote, but as if Lucy Kellaway were literally its peer. When I asked about it, the response included almost comical prevarication.

So, did the chatbot lie to you (and me) and then try to gaslight us? Well, it may feel like that. I remember feeling shocked at what seemed to me like plagiarism and outraged at the mealy-mouthed explanations.

But, as Melissa Heikkilä recently wrote in the MIT Technology Review, we should be careful about how much agency we assume AI has: “Even the name of the technology, artificial intelligence, is tragically misleading. Language models appear smart because they generate humanlike prose by predicting the next word in a sentence.
The technology is not truly intelligent, and calling it that subtly shifts our expectations, so we treat the technology as more capable than it really is.”

It’s not really deliberately lying or knowingly gaslighting. It’s just sophisticated mimicry. At least at this stage of the technology’s development.

Send your questions to Work Therapy by emailing jonathan@theinkbureau.com.au.

Jonathan Rivett is a writer based in Melbourne. He’s written about workplace culture and careers for more than a decade.