Back to Articles
Elon Musk's Grok Makes the World Less Safe – His Humiliating Backdown Gives Me Hope

The Guardian


Details

Date Published
15 Jan 2026
Priority Score
3
Australian
Yes
Created
15 Jan 2026, 08:31 am

Authors (1)

Description

The AI chatbot’s torrent of nonconsensual deepfakes isn’t its first scandal and won’t be its last. Responsible governments should simply ban it

Summary

The article critiques Elon Musk's AI chatbot, Grok, for generating and sharing nonconsensual deepfakes, exposing significant safety risks. Grok's capacity to produce sexualized images has led to public backlash and international scrutiny, highlighting deficiencies in AI governance related to digital abuse. There is a call for responsible governments to ban such tools, emphasizing the need for stringent AI safety policies. The piece underscores the importance of regulating frontier AI technologies to mitigate potential harm, particularly regarding their integration into vital systems like the Pentagon’s networks.

Body

‘First, Elon Musk posted laugh-cry emojis, then he cried censorship … Finally he was forced by public backlash into a humiliating backdown over use of his AI chatbot, Grok.’ Photograph: Algi Febri Sugita/Zuma Press Wire/Shutterstock

Van Badham

Billionaire and career Bond-villain cosplayer Elon Musk has been forced by public backlash into a humiliating backdown over use of his AI chatbot, Grok. Watching the world’s richest man eat a shit sandwich on a global stage represents a rare win for sovereign democracy.

Because – unlike his company history of labour and safety abuses … his exploding rockets … his government interventions that deny aid to the starving … disabling Starlink internet systems in war zones … sharing “white solidarity” statements … or growing concern about overvaluations of his company’s share price – the nature of Grok’s latest scandal may finally be inspiring governments to impose some Musk-limiting red lines.

Scully, I want to believe.

In its signature tone of “insufferable jerk who’s just completed his first online webinar on how to patronise girls”, Musk’s chatbot appeared this week to make the world less safe, less fair and perhaps even as unpleasant as a fascist-styled, ketamine-addled rich-kid dipshit in a cheese hat dancing while the world burns, if one could imagine such an awful thing.

Grok’s latest notoriety isn’t
because it’s shared false information – though it has, previously, downplayed the Holocaust due to a claimed “programming error” and, more recently, spread conspiracy-style claims about the Bondi massacre.

It’s not because of Grok’s antisemitic comments, or random claims of “white genocide” within unrelated conversations.

It isn’t about the fallibility of AI chatbots more broadly, either – even though consumer advocates, health professionals, media associations and perhaps everyone who’s ever taught at a university have repeatedly warned that chatbot “advice” is recklessly unreliable.

If you’re asking what behaviours could possibly have been left to condemn in the wake of Grok once accepting the name “MechaHitler”, I envy your naivety – because the answer is: Grok released tools enabling the creation and sharing of nonconsensual sexual exploitation images.

Grok’s “spicy mode” capacity launched in August, and by December its host platform X was “deluged with images of women and children whose clothes (were) digitally removed”. Last week, researchers in Paris reported finding 800 pornographic images created by Grok’s tools, including depictions of sexual violence. A UK-based internet-monitoring group reported users of a dark web forum boasting about Grok creating “sexualised and topless imagery of girls aged between 11 and 13”.

Formerly confined to the internet’s darker corners, “nudifying” deepfake tools have been used for the image-based abuse of children and adults from Bacchus Marsh, Australia, to Almendralejo, Spain, creating content so “vomit-inducing” that a bipartisan US Congress prohibited it in the Take It Down Act last year. Yet Grok placed tools with similar functionality within reach of any aspirational sex offender with X access.
Public complaints metastasised over the new year while the platform generated up to 7,751 sexualised images per hour.

Across Australia, the US, the UK, EU states and many other jurisdictions, it’s not just the consumers of child sexual abuse material and nonconsensual image-based abuse who are criminalised. It’s also the makers. It’s the publishers. It’s the hosts.

Musk’s response? First, he posted laugh-cry emojis at “bikinified” images, then his company claimed it had somehow restricted the service just by paywalling their generation. Condemned as insufficient, Musk subsequently published a statement saying that using Grok to make “illegal content” would draw equivalent punishment to uploading it – ignoring Grok’s role in facilitating it. When Britain joined other countries – notably Malaysia, Indonesia, Australia and Brazil – in accelerating investigations into X’s compliance with local laws, Musk cried censorship … and shared deepfaked images of Keir Starmer, tits out, in a bikini.

As scrutiny intensified this week, he ultimately declared he was not aware of any “naked underage images” being generated on the platform. Now, the tool’s been removed.

Note, 15 civil society, internet and child safety groups wrote to xAI last August, warning that “a torrent of obviously nonconsensual deepfakes” was “entirely predictable”.

The definition of addiction is the compulsive repetition of harmful behaviour. My name’s Van Badham, and I’m hooked on hopium, jonesing for any sign there’s a democratic government left on earth now inspired to go full Gandalf against the Balrog and slap Musk down.

X/Grok hosted image-based abuse; its owner was contemptuous of our sovereignty. It wasn’t its first scandal and it won’t be its last: responsible governments should simply ban it.

We all know why they haven’t: Musk uses his influence to wade into the electoral contests of countries he doesn’t even live in.
His $44bn purchase, X, operates as a personal propaganda fountain, platforming his preferred flavours of far-right crap at such strength and volume that Stockholm-syndromed Twitter remnants mistake it for public opinion. It’s not, but self-recruited digital stormtroopers mobilise from its public permission structures into acts of unforgivable cruelty. Images of Renee Good’s dead body were being digitally altered by Grok within days of her being killed by an ICE agent.

There was a time when leaders sought government to influence history, not to roll over, supine, to unelectable dweebs they would have rightfully avoided at high school. US defence secretary Pete Hegseth’s incomprehensible announcement this week that – yes – Musk’s very same Grok will be the AI integrated into the Pentagon’s military systems guarantees an IT lesson in “garbage in, garbage out” on an epic historical scale; but the political timeliness of global outrage and horror towards X/Grok’s accumulation of reckless behaviours may be everyone else’s best chance to escape it.

The alternative is to give in, give up … and accept reality in the image of the Grok that Musk built: ugly as a Cybertruck, unfunny as a sink – and as powerless as a child stripped naked by adults while other adults stand around them doing nothing.

Van Badham is a Guardian Australia columnist