X to ban users from earning revenue if they post unlabelled AI-generated war videos

The Guardian


Details

Date Published
4 Mar 2026

Authors (1)

Description

Social media feeds have been flooded with fake battle scenes since start of Iran conflict

Summary

This article reports on X's new policy to suspend users from earning revenue for 90 days if they repeatedly post unlabelled AI-generated war videos, with permanent bans for subsequent violations. This move comes in response to the proliferation of fake battle scenes and misinformation during the Iran conflict. It highlights concerns about the ease with which AI can generate misleading content and the impact on public access to authentic information during critical events, touching upon the broader challenge of AI-driven disinformation and its societal implications.

Body

X said: ‘During times of war, it is critical that people have access to authentic information.’ Photograph: Étienne Laurent/EPA

Elon Musk’s X will ban users from making money on the platform if they repeatedly post unlabelled AI-generated war videos, after social media feeds were flooded with fake battle scenes from the Iran conflict.

The social media platform, which has about half a billion monthly active users, will suspend people from earning revenue from posts for 90 days if they put up AI-generated videos of an armed conflict without adding a disclosure that the footage was made with AI. A second infraction would lead to a permanent ban, it said on Tuesday night, after the first days of the conflict in Iran were marked by a torrent of bogus online footage.

Timelines on X, as well as on Instagram and Facebook, which are run by Meta, have carried numerous faked battle scenes, including Iranian rockets pursuing and shooting down a US jet – which was viewed 70m times, according to checks by BBC Verify – and another clip that used AI to replace smoke rising from the site of a real missile strike with a fake fireball several times bigger.

Users can make hundreds of dollars a month on X as part of the platform’s advertising model if they build substantial followings approaching 100,000 people, which incentivises the production of shocking viral posts.

Nikita Bier, the head of product at X, said: “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people.
Starting now, users who post AI-generated videos of an armed conflict – without adding a disclosure that it was made with AI – will be suspended from creator revenue sharing for 90 days. Subsequent violations will result in a permanent suspension from the programme.”

Other fake videos of the war have achieved huge reach. A clip circulating on Instagram purporting to show a huge conflagration after “Iran destroyed the US airbase in Riyadh” was fake and has been identified as 18-month-old footage of the aftermath of an Israeli strike on an oil refinery in Hodeidah in Yemen.

Full Fact, the UK factchecking organisation, said it is “increasingly seeing AI turbocharge the spread of misinformation on social media”.

Steve Nowottny, Full Fact’s editor, said: “In the last few days we’ve seen lots of examples of AI images shared across different social media platforms as if they are real, including fake pictures of an aircraft carrier and the Burj Khalifa on fire, and an image supposedly showing the body of Ayatollah Khamenei.

“Even when AI images seem low quality, or still have a visible watermark on them, we often see them shared at scale – and the sheer volume of this fake content and the ease with which it is generated and spreads is a real concern.”

Sam Stockwell, who researches AI in online information at the UK’s Centre for Emerging Technology and Security, said there appeared to be a new trend of users asking AI chatbots to verify whether videos were AI fakes.

“Unfortunately chatbots are not very good at assessing real-time events,” he said. That does not, however, stop people posting the chatbots’ incorrect assessments as evidence that something is real.
“People are trying to manipulate AI outputs to support their narrative and arguments about the war,” he said.

Meta has been approached for comment.