‘Dangerous Nonsense’: AI-Authored Books on ADHD for Sale on Amazon
The Guardian
Details
- Date Published
- 3 May 2025
- Priority Score
- 3
- Australian
- No
- Created
- 5 May 2025, 05:43 pm
Description
Experts say online retailer has ethical responsibility to guard against chatbot-generated work on sensitive topics
Summary
The article highlights a concerning trend of AI-generated books on Amazon that spread misinformation on sensitive topics such as ADHD. These books, apparently produced by AI models like ChatGPT, are criticized for their potential to offer misleading or harmful advice, given the lack of vetting and regulation. Experts question the ethical responsibilities of online marketplaces in preventing the dissemination of such content, likening the current regulatory environment to a "wild west" with insufficient accountability for both the creators and the platforms hosting this material. The situation underscores significant gaps in current AI governance and points to the need for stronger regulatory frameworks to mitigate the risks of inaccurate AI-generated content, raising alarms at both global and local levels about AI safety and public trust in AI systems.
Body
Richard Wordsworth was given a book bought off Amazon after receiving an adult ADHD diagnosis. Photograph: Martin Godwin/The Guardian

Amazon is selling books marketed at people seeking techniques to manage their ADHD that claim to offer expert advice yet appear to be authored by a chatbot such as ChatGPT.

Amazon's marketplace has been deluged with works produced by artificial intelligence that are easy and cheap to publish but include unhelpful or dangerous misinformation, such as shoddy travel guidebooks and mushroom foraging books that encourage risky tasting.

A number of books have appeared on the online retailer's site offering guides to ADHD that also seem to be written by chatbots. The titles include Navigating ADHD in Men: Thriving with a Late Diagnosis; Men with Adult ADHD: Highly Effective Techniques for Mastering Focus, Time Management and Overcoming Anxiety; and Men with Adult ADHD Diet & Fitness.

Samples from eight books were examined for the Guardian by Originality.ai, a US company that detects content produced by artificial intelligence.
The company said each had a rating of 100% on its AI detection score, meaning its systems were highly confident that the books were written by a chatbot.

Experts said online marketplaces were a "wild west" owing to the lack of regulation around AI-produced work, and that dangerous misinformation risked spreading as a result.

Michael Cook, a computer science researcher at King's College London, said generative AI systems were known to give dangerous advice, for example around ingesting toxic substances, mixing together dangerous chemicals or ignoring health guidelines. As such, it was "frustrating and depressing to see AI-authored books increasingly popping up on digital marketplaces", particularly on health and medical topics, which could result in misdiagnosis or worsen conditions, he said.

"Generative AI systems like ChatGPT may have been trained on a lot of medical textbooks and articles, but they've also been trained on pseudoscience, conspiracy theories and fiction," said Cook.

"They also can't be relied on to critically analyse or reliably reproduce the knowledge they've previously read – it's not as simple as having the AI 'remember' things that they've seen in their training data.
Generative AI systems should not be allowed to deal with sensitive or dangerous topics without the oversight of an expert," he added.

Yet Cook noted that Amazon's business model incentivised this type of practice, as it made "money every time" people bought a book, whether the work was "trustworthy or not", while the generative AI companies that created the products were not held accountable.

Prof Shannon Vallor, the director of the University of Edinburgh's Centre for Technomoral Futures, said Amazon had "an ethical responsibility to not knowingly facilitate harm to their customers and to society", although it would be "absurd" to make a bookseller responsible for the contents of all its books.

Problems were arising because the guardrails previously deployed in the publishing industry – such as reputational concerns and the vetting of authors and manuscripts – had been completely upended by AI, she noted. This was compounded by a "wild west" regulatory environment in which there were no "meaningful consequences for those who enable harms", fuelling a "race to the bottom", Vallor said.

At present, there is no legislation that requires AI-authored books to be labelled as such. Copyright law applies only if a specific author's content has been reproduced, although Vallor noted that tort law should impose "basic duties of care and due diligence".

The Advertising Standards Authority said AI-authored books cannot be advertised in a way that gives the misleading impression they were written by a human, enabling people who had seen such books to submit a complaint.

Richard Wordsworth was hoping to learn about his recent adult ADHD diagnosis when his father recommended a book he had found on Amazon after searching "ADHD adult men". When Wordsworth sat down to read it, "immediately, it sounded strange", he said.
The book opened with a quote from the conservative psychologist Jordan Peterson and then contained a string of random anecdotes, as well as historical inaccuracies.

Some advice was actively harmful, Wordsworth observed. For example, one chapter discussing emotional dysregulation warned that friends and family did not "forgive the emotional damage you inflict. The pain and hurt caused by impulsive anger leave lasting scars."

When Wordsworth researched the author, he spotted a headshot that looked AI-generated, plus a lack of qualifications. He searched several other titles in the Amazon marketplace and was shocked to encounter warnings that his condition was "catastrophic" and that he was "four times more likely to die significantly earlier".

He felt immediately "upset", as did his father, who is highly educated. "If he can be taken in by this type of book, anyone could be – and so well-meaning and desperate people have their heads filled with dangerous nonsense by profiteering scam artists while Amazon takes its cut," Wordsworth said.

An Amazon spokesperson said: "We have content guidelines governing which books can be listed for sale and we have proactive and reactive methods that help us detect content that violates our guidelines, whether AI-generated or not. We invest significant time and resources to ensure our guidelines are followed and remove books that do not adhere to those guidelines.

"We continue to enhance our protections against non-compliant content and our process and guidelines will keep evolving as we see changes in publishing."