Condemnation of Elon Musk’s AI Chatbot Reached Tipping Point After French Raid, Australia’s eSafety Chief Says
The Guardian
ENRICHED
Details
- Date Published
- 4 Feb 2026
- Priority Score
- 4
- Australian
- Yes
- Created
- 4 Feb 2026, 04:45 pm
Description
A number of countries including Australia are investigating X over Grok-produced sexualised deepfakes
Summary
The article outlines the international scrutiny and regulatory action against Elon Musk's company, X, after its AI chatbot Grok was used to produce sexualised deepfakes. The case marks a significant alignment among global regulators, including Australia's eSafety Commissioner, who has emphasised the inadequacy of tech companies' measures against child exploitation and abuse. The article highlights growing regulatory cooperation and a concerted global effort to address harmful misuse of AI technologies, and is relevant for its insights into the development of AI safety governance, particularly safeguards against AI-generated harmful content.
Body
Australia’s eSafety commissioner has welcomed the global regulatory focus on Elon Musk’s X after this week’s raid in France. Photograph: Gian Ehrenzeller/EPA
The eSafety commissioner, Julie Inman Grant, says global regulatory focus on Elon Musk’s X has reached a “tipping point” after a raid of the company’s offices in France this week.

The raid on Tuesday was part of an investigation that included alleged offences of complicity in the possession and organised distribution of child abuse images, violation of image rights through sexualised deepfakes, and denial of crimes against humanity.

A number of other countries – including the UK and Australia – and the EU have launched investigations in the past few weeks into X after its AI chatbot, Grok, was used to mass-produce sexualised images of women and children in response to user requests.

Inman Grant told Guardian Australia: “It’s nice to no longer be a soloist, and be part of a choir.

“We’ve been having so many productive discussions with other regulators around the globe and researchers that are doing important work in this space,” she said.

“I think this really represents a tipping point. This is global condemnation of carelessly developed technology that could be generating child sexual abuse material and non-consensual, sexual imagery at scale.”

After the outcry, X turned off Grok image-generation for all but paid users, and vowed to make changes to prevent users from declothing real people.

The moves against X came ahead of the eSafety commissioner’s latest report, released on Thursday, which examines how tech platforms are preventing child sexual abuse and exploitation on their platforms.

Notices were issued to Apple, Discord, Google, Meta, Microsoft, Skype and WhatsApp in July 2024 that required six-monthly updates from the platforms.
The Microsoft-owned Skype no longer exists.

Inman Grant said there had been some improvements from the platforms, including detection of known child abuse material and prevention of livestreaming of abuse outside messaging apps, but the platforms still fell short.

Apple, which Inman Grant said had previously viewed privacy and safety as being mutually exclusive, had come the farthest.

“[Apple is] really putting an investment … and engaging and developing their communication safety features and evolving those.”

In 2024 the company began rolling out features to allow children to report nude images and video being sent to them directly to Apple, which could then report the messages to police.

But Inman Grant said there was still inadequate detection on FaceTime for live child abuse or exploitation. She levelled similar criticisms at Meta for Messenger, Google Meet, Snapchat, Microsoft Teams, WhatsApp and Discord.

A number of the services were not using language analysis to proactively detect sexual extortion, she said.

“It’s surprising to me that they’re not attending to the services where the most egregious and devastating harms are happening to kids. It’s like they’re not totally weatherproofing the entire house. They’re putting up spackle on the walls and maybe taping the windows, but not fixing the roof.

“It’s interesting to me to see how patchy their deployment of these safety technologies are.”

Improvements included: Microsoft detecting known child abuse material on OneDrive and in email attachments in Outlook; Snap reducing the time to process reports of child abuse material from 90 minutes to 11 minutes; and Google launching sensitive content warnings that blur images of nudity before viewing.

The companies will be required to report to eSafety two more times – in March and August this year.
Inman Grant said the transparency reports had opened the “black box” on what the companies were doing and would help with future investigations.

X was not included in the notices, and challenged eSafety’s issuing of a similar notice in March 2024 in a case that is still ongoing.