AI Chatbots: Character.ai and Vitality Health Ditch Conversational Interfaces Over Liability and Control Concerns
Australian Financial Review
READ
Details
- Date Published
- 11 Nov 2025
- Priority Score
- 2
- Australian
- Yes
- Created
- 16 Nov 2025, 10:14 am
Description
Character.ai is among the companies opting for a little less conversation and a lot more thought about building products that are safer for teens.
Summary
The article highlights the decision of companies like Character.ai and Vitality Health to abandon AI chatbots due to concerns over liability and lack of control. These companies are reevaluating the effectiveness and safety of conversational interfaces, which can be manipulated or 'jailbroken' by users, leading to potentially harmful outcomes. This reveals a growing awareness and caution regarding the deployment of AI technologies, particularly when it involves user interaction that may result in unintended, risky behaviors. The article provides insight into the ongoing debate over the place of chatbots in future AI applications, stressing the need for safer, more controlled AI products. However, the piece does not delve deeply into existential AI risks or contribute significant new insights into AI safety governance frameworks.
Body
Technology | AI | Nov 13, 2025 – 5.00am | Bloomberg Opinion

For three years, chatbots have been the face of generative artificial intelligence. Type anything into them to get a personalised response, which morphs into a seemingly magical dialogue with a machine. While that conversational interface may seem the best way to harness large language models, some companies are starting to ditch chatbots, worried about liability and loss of control. They have found that even with guardrails, users can "jailbreak" the technology and get a chatbot to go off topic, sometimes in harmful or unsavoury directions. They might be leaving magic on the table, but these companies are also potentially building safer, more focused products, and raising questions about whether chatbots really are the future interface for AI or just a passing fad.