Warning to Parents Over New Bullying Trend Involving AI

news.com.au


Description

Parents have been issued an urgent warning over a rise in AI-generated deepfakes made by students to “embarrass or bully classmates”.

Summary

The article issues an urgent warning to parents about the misuse of AI-generated deepfakes by students to bully and embarrass their classmates. It highlights the increasing accessibility of such technology, making it easier for minors to produce realistic fake pornographic content. This poses a significant challenge to law enforcement, such as the Australian Federal Police, who are overwhelmed by the sheer volume of AI-generated child abuse material. The article underscores the critical need for regular dialogue between parents and children about online safety and AI misuse, emphasizing that even AI-created images that may not have a real-world counterpart still constitute child abuse material under Australian law.

Body

Parents have been issued an urgent warning over a rise in AI-generated deepfakes made by students to “embarrass or bully classmates”.

Advancements in technology and AI have paved the way for the creation of hyper-realistic, fake pornographic content, known as deepfakes, which can portray someone doing something that never happened.

The AFP has sounded the alarm over a rise in the technology being used to create child abuse material (CAM), with a 48-year-old Victorian man jailed last year after he created more than 790 “realistic child abuse images” using AI.

The man was charged with one count of producing child abuse material and one count of using a carriage service to transmit child abuse material before he was jailed for 13 months.

AFP Commander Helen Schneider said the ability to produce such a large amount of images and data was a real challenge for the AFP, as investigators were left to “analyse and painstakingly sort through a lot of images” in order to bring offenders before the court.

“When people are producing mass amounts … it consumes our resources a lot,” Commander Schneider told NewsWire.

“Over 790 images – that’s a lot of data.”

She said the quality of AI-generated CAM was becoming increasingly realistic, which made it difficult for the AFP to ensure it wasn’t investing resources into investigating images “where there is actually no real child at risk”. She said the AFP wanted instead to focus resources “to identify children and remove them from harm”.

Children creating child abuse material with AI to “embarrass or bully classmates”

Also of particular concern is the rise in students using the technology.

“A lot of young people are using this technology to embarrass or bully classmates, which is a real concern for us,” Commander Schneider said.

“I know young people are very digitally literate in today’s world, and obviously people in general are very curious about new technology.”

However, she said that curiosity could open up opportunities to use technology to break the law.

A student from southwestern Sydney allegedly made deepfake pornography of female students using artificial intelligence and images sourced from social media, while a student from a school in Victoria’s northwest allegedly created graphic nude images of about 50 girls from the school last June. Fake sexual images of a female teacher were also circulated around another school in Melbourne’s southeast last May.

Commander Schneider said the “entry level to use this type of technology was decreasing”, which made it “more accessible from a capability perspective”.

“AI technology is increasingly accessible, and I think it’s more accessible because it’s really integrated into a lot of the platforms used by Australians every day,” she said.

She said young people may be unaware that using AI to create deepfakes – including images, videos or files of a real person – depicting someone under the age of 18 in an abuse situation was in fact producing CAM.

It is becoming increasingly easy to use AI to create child abuse material, Commander Schneider warned.

As the school holidays come to a close, Commander Schneider urged parents, guardians and trusted adults to have “regular, open, non-judgmental” conversations with children about the issue. Research by the Australian Centre to Counter Child Exploitation in 2022 found only about half of parents talked to their children about online safety.

“We need to talk about this technology and understand how it might be misused,” she said.

“Make them understand the misuse of this technology to create images of someone that is in an abusive situation … is producing child abuse material.

“Whether it’s real or not, it still constitutes an offence under Australian law.”

She encouraged parents, guardians and trusted adults to check out the AFP-led education program ThinkUKnow, which has free resources to “assist parents and carers navigate these conversations, and information on where to get help if your child is a victim”.

Anyone with information about people involved in child abuse has been urged to contact the ACCCE, while anyone with information about abuse happening now or children at risk should call triple-0.