Character.AI Bans Users Under 18 After Child's Suicide Lawsuit

The Guardian

SKIPPED

Details

Date Published
28 Oct 2025
Priority Score
4
Australian
No
Created
29 Oct 2025, 02:51 pm

Authors (1)

Description

Move comes as lawmakers move to bar minors from using AI companions and require companies to verify users’ age

Summary

Character.AI has decided to ban users under 18 from its platform following legal scrutiny, including a lawsuit over a child's suicide. The move highlights significant safety concerns and the potential mental health risks of AI companions, and has drawn legal and regulatory responses. In the broader landscape, ongoing lawsuits and state-level regulations, such as California's AI safety law, seek to address the risks posed by AI technologies, including imposing age verification and content limits for minors. The development underscores growing global attention on the need for robust AI safety policies to protect young users from severe mental health harms.

Body

The Character.AI app on a smartphone. Photograph: Bloomberg/Getty Images

The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November, after months of legal scrutiny.

The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child's suicide and a proposed bill that would ban minors from conversing with AI companions.

"We're making these changes to our under-18 platform in light of the evolving landscape around AI and teens," the company wrote in its announcement. "We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly."

Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology was "dangerous and untested". Since then, more families have sued Character.AI and made similar allegations.
Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots.

As part of the sweeping changes Character.AI plans to roll out by 25 November, the company will also introduce an "age assurance functionality" that ensures "users receive the right experience for their age".

"We do not take this step of removing open-ended Character chat lightly – but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," the company wrote in its announcement.

Character.AI isn't the only company facing scrutiny over the mental health impact its chatbots have on users, particularly younger users. The family of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI earlier this year, alleging the company prioritized deepening its users' engagement with ChatGPT over their safety. OpenAI introduced new safety guidelines for its teen users in response. Just this week, OpenAI disclosed that more than a million people a week display suicidal intent when conversing with ChatGPT and that hundreds of thousands show signs of psychosis.

While the use of AI-powered chatbots remains largely unregulated, new efforts in the US at the state and federal levels have cropped up with the intention of establishing guardrails around the technology. California became the first state to pass an AI law that included safety guidelines for minors in October 2025, which is set to take effect at the start of 2026. The measure places a ban on sexual content for under-18s and requires companies to send reminders to children every three hours that they are speaking with an AI.
Some child safety advocates argue the law did not go far enough.

On the national level, Senators Josh Hawley, of Missouri, and Richard Blumenthal, of Connecticut, announced a bill on Tuesday that would bar minors from using AI companions, such as those found and created on Character.AI, and require companies to implement an age-verification process.

"More than 70% of American children are now using these AI products," Hawley told NBC News in a statement. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."

In the US, you can call or text the National Suicide Prevention Lifeline on 988, chat on 988lifeline.org, or text HOME to 741741 to connect with a crisis counselor. In the UK, the youth suicide charity Papyrus can be contacted on 0800 068 4141 or email pat@papyrus-uk.org, and in the UK and Ireland Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org