Parents Warned Over New AI-Generated Bullying Trend

Herald Sun


Description

Parents have been issued an urgent warning over a rise in AI-generated deepfakes made by students to “embarrass or bully classmates”.

Summary

The article highlights a growing concern about students using AI-generated deepfakes to create child abuse material (CAM) aimed at bullying classmates. The Australian Federal Police (AFP) has flagged the issue as increasingly problematic because of the realistic quality of the images and the accessibility of the technology. Such misuse poses novel challenges for law enforcement, which must distinguish real victims from AI-fabricated imagery, straining investigative resources. The rise of this damaging trend has significant implications for Australian online safety policy and is prompting discussion about how to educate children on the responsible use of AI.

Body

Parents have been issued an urgent warning over a rise in AI-generated deepfakes made by students to “embarrass or bully classmates”.

Advancements in technology and AI have paved the way for the creation of hyper-realistic, fake pornographic content, known as deepfakes, which can portray someone doing something that never happened.

The AFP has sounded the alarm over a rise in the technology being used to create child abuse material (CAM), with a 48-year-old Victorian man jailed last year after he created more than 790 “realistic child abuse images” using AI.

The man was charged with one count of producing child abuse material and one count of using a carriage service to transmit child abuse material before he was jailed for 13 months.

AFP Commander Helen Schneider said the ability to produce such a large volume of images and data was a real challenge for the AFP, as investigators were left to “analyse and painstakingly sort through a lot of images” in order to bring offenders before the court.

“When people are producing mass amounts … it consumes our resources a lot,” Commander Schneider told NewsWire.

“Over 790 images – that’s a lot of data.”

She said the quality of AI-generated CAM was becoming increasingly realistic, making it difficult for the AFP to ensure it wasn’t investing resources into investigating images “where there is actually no real child at risk”.

She said the AFP wanted to instead focus resources “to identify children and remove them from harm”.

Children creating child abuse material with AI to “embarrass or bully classmates”

Also of particular concern is the rise in students using the technology.

“A lot of young people are using this technology to embarrass or bully classmates, which is a real concern for us,” Commander Schneider said.

“I know young people are very digitally literate in today’s world, obviously people in general are very curious about new technology.”

However, she said that curiosity could open up opportunities to use technology to break the law.

A student from southwestern Sydney allegedly made deepfake pornography of female students using artificial intelligence and images sourced from social media, while a student from a school in Victoria’s northwest allegedly created graphic nude images of about 50 girls from the school last June.

Fake sexual images of a female teacher were also circulated around another school in Melbourne’s southeast last May.

Commander Schneider said the “entry level to use this type of technology was decreasing”, making it “more accessible from a capability perspective”.

“AI technology is increasingly accessible and I think it’s more accessible because it’s really integrated into a lot of the platforms used by Australians every day,” she said.

She said young people may be unaware that using AI to create deepfakes – including images, videos or files of a real person – to depict someone under the age of 18 in an abusive situation was in fact producing CAM.

It is becoming increasingly easy to use AI to create child abuse material, Commander Schneider warned.

As the school holidays come to a close, Commander Schneider urged parents, guardians and trusted adults to have “regular, open, non-judgmental” conversations with children about this issue.

Research by the Australian Centre to Counter Child Exploitation in 2022 found only about half of parents talked to their children about online safety.

“We need to talk about this technology and understand how it might be misused,” she said.

“Make them understand the misuse of this technology to create images of someone that is in an abusive situation … is producing child abuse material.

“Whether it’s real or not, it still constitutes an offence under Australian law.”

She encouraged parents, guardians and trusted adults to check out the AFP-led education program ThinkUKnow, which has free resources to “assist parents and carers navigate these conversations, and information on where to get help if your child is a victim”.

Anyone with information about people involved in child abuse has been urged to contact the ACCCE, while anyone with information about abuse happening now, or children at risk, should contact triple-zero.

Originally published as Parents warned over rise in AI-generated child abuse material to ‘embarrass’ and ‘bully’ classmates