Fake Minns, Altered Images and Psyop Theories: Bondi Attack Misinformation Shows AI’s Power to Confuse
The Guardian
Details
- Date Published
- 18 Dec 2025
- Priority Score
- 3
- Australian
- Yes
- Created
- 18 Dec 2025, 08:30 am
Description
For now many fakes are easy to spot. But audiences could find it increasingly difficult to tell fact from fiction as tech improves
Summary
The article highlights the significant role of AI in spreading misinformation in the aftermath of the Bondi Beach terror attack. It underscores how hard it is becoming to distinguish real from fake content as generative AI fuels the spread of deepfakes and manipulated imagery. This has serious implications for AI governance and the global effort to combat AI-fuelled misinformation, particularly given weakened fact-checking mechanisms on platforms like X (formerly Twitter). The issues discussed are highly relevant to AI safety policy in Australia and reflect broader global concerns about the misuse of AI to create and distribute false information.
Body
Mourners in Sydney pay tribute to the victims of the Bondi beach shooting attack. Photograph: David Gray/AFP/Getty Images
Misinformation, turbocharged by AI, was hard to avoid in the hours and days that followed the Bondi beach terror attack, as some platforms pushed dubious claims to users trying to find factual information.

The X "for you" page, which serves up content determined by an algorithm, was filled with false details, including: that the attack that left 15 people dead was a psyop or false-flag operation; that those behind the attack were IDF soldiers; that those injured were crisis actors; that an innocent person was one of the alleged attackers; and that the Syrian Muslim hero who fought the attackers was a Christian with an English name.

Generative AI only made matters worse. An altered clip of the New South Wales premier, Chris Minns, with deepfaked audio making false claims about the attackers, was shared across multiple accounts.

In another particularly egregious example, an AI-generated image based on an actual photo of one of the victims was altered to suggest he was a crisis actor having red makeup applied to his face to look like blood.

"I saw these images as I was being prepped to go into surgery today and will not dignify this sick campaign of lies and hate with a response," the man depicted in the fake image, human rights lawyer Arsen Ostrovsky, later posted on X.

Pakistan's information minister, Attaullah Tarar, said his country had been the victim of a coordinated online disinformation campaign in the wake of the attack, with false claims circulating that one of the suspects was a Pakistani national. The man who was falsely identified told Guardian Australia it was "extremely disturbing" and traumatising to have his photo circulated on social media next to claims he was the alleged attacker. Tarar said the Pakistani man was "a victim of a malicious and organised campaign" and alleged the disinformation campaign originated in India.

Meanwhile, X's AI chatbot Grok told users that an IT worker with an English name, rather than the Syrian-born Ahmed al-Ahmed, was the hero who tackled and disarmed one of the alleged shooters. The claim appears to have originated on a website set up on the day of the attack to mimic a legitimate news site. AI-generated images of Ahmed also proliferated on social media, promoting crypto schemes and fake fundraisers.

It was a far cry from Twitter's heyday as a hub for breaking news. Misinformation was floating around back then too, but it was less common and it wasn't served up via an algorithm designed to reward engagement based on outrage (particularly for verified accounts that stand to benefit financially from that engagement). Many of the posts touting false claims had hundreds of thousands or even millions of views. Legitimate news was circulating on X, but it was buried under misinformation turbocharged by AI.

When Elon Musk took over X, he dismantled the site's factchecking scheme in favour of a user rating system called "community notes", which appends crowdsourced factchecks to posts. Other platforms are following suit: Meta has dismantled its previous factchecking system in favour of its own version of community notes.

But, as the QUT lecturer Timothy Graham said this week, the community notes system isn't helpful in situations where opinions are deeply divided. It takes too long. Community notes have since been applied to many of the above examples, but only long after most people would have seen the original posts in their feeds.

X is trialling having Grok generate its own community notes to factcheck posts, but if the Ahmed example is anything to go by, this is even more worrying. The company did not respond to questions about what it is doing to tackle misinformation posted on its platform, or propagated by its AI chatbot.

A saving grace is that many of the fakes are still easily spotted – for now. The fake Minns, for example, had an American twang in the accent, making it obvious it wasn't him. The crisis actor post had many of the hallmarks of dodgy AI image generation, such as incorrectly rendered text on a T-shirt.

For the most part, media outlets ignored the posts or called them out. But as AI models improve, that could change, making it even harder to distinguish fact from fiction. Meanwhile, AI companies and the platforms hosting their content appear indifferent to doing anything to prevent it.

Digi, the industry group representing social media platforms in Australia, proposed dropping a requirement to tackle misinformation from an industry code earlier this year, saying "recent experience" demonstrated "misinformation is a politically charged and contentious issue within the Australian community".

It's hard to see how this week will change things.
Josh Taylor is a technology reporter for Guardian Australia