Perth teachers have described AI and the rise of deepfakes as an urgent issue confronting today’s teenagers, amid an explosion in the creation of digitally altered intimate images.

The rapidly evolving nature of deepfake technology is creating fear and confusion among students, high school teachers have told The West Australian. It’s also requiring schools to regularly update their digital safety lessons and bring in cyber safety experts.

It comes after the number of explicit deepfake images of underage Australians being circulated doubled in 18 months, prompting eSafety Commissioner Julie Inman Grant to issue an urgent warning to schools in June. Four out of five reports made to eSafety involved young female victims.

Methodist Ladies College senior years deputy principal Katy Moore said parents, students and teachers were concerned about the threats deepfakes posed.

“Previously, if a child has posted an intimate image online it can be circulated widely, but they have been part of the journey,” she said. “This is the first time where every single person is a potential victim, so it concerns (students) greatly.

“Education is key. It’s about what are our rights and what to do, and creating really safe partnerships with school staff.

“It’s a deeply embarrassing situation for a young person to be in, so for them to know there are a number of safe people here to report it to and assist with acting on it.”

MLC brought in cyber safety educator Paul Litherland earlier this year to explain how to identify AI images, what to do if students found themselves a victim, and the law around deepfakes.

“We then designed a series of well-being lessons to reiterate those messages,” Ms Moore said.

The West Australian understands Catholic Education WA has introduced policies relating to AI across its schools. They include advice to parents to model critical thinking for children when it comes to what they are allowed to access.
Parents were urged not to upload photos of their children into AI tools — including seemingly innocent apps such as those that make a child’s face appear like a cartoon — as “images can be stored, processed or even used to train external models”. They were warned this posed serious privacy and consent concerns.

Director of Information and Learning Technology at Scotch College, Shirin Reyhani, said AI presented “some of the most urgent challenges young people face online today”.

“Increasingly sophisticated scams, deepfakes, and forms of cyberbullying are being generated or amplified with AI technologies,” she said.

“For example, students may encounter AI-generated phishing emails that appear to come from trusted teachers or school administrators, or see deepfake videos circulating among peers that spread misleading information and rumours.

“Group chats can also become platforms for AI-powered harassment, with bots generating hurtful messages or fake images targeted at individuals.”

Ms Reyhani said as a result, there had been a significant shift towards teaching students how to critically evaluate AI-generated content and understand algorithmic bias. This included lessons where students analysed AI-generated news articles for bias and accuracy, and learned to question the source and intent behind digital content.

Hale School’s Director of Learning Technology Rob Barugh said the school had developed a standalone unit on AI, taught in Year 7, in response to its growing emergence.

“AI bias, AI misinformation and deepfake is a big part of that content now,” he said.

“This generation is going to grow up dealing with misinformation and deepfake technology so media literacy has become a major part of that.

“That’s going to be a real struggle for future generations. It’s a real worry for society.
“For young and old, it’s leaving people confused, which is the complete opposite endgame for what educators are trying to do.”

The growing problem was in part why Hale introduced its policy, revealed in The West Australian this week, requesting parents delay giving their children smartphones and social media access until Year 9.

Mr Litherland presents cyber safety education to schools and parent groups across the country and described the rise of deepfakes, and a lack of regulation and moderation being applied to the technology, as “a huge concern”. He was aware of cases of sextortion where victims had been threatened with the circulation of deepfake nudes unless they handed over money.

“The AI and nudifying issue has the potential to impact on all of us, not just kids. I’m worried about it,” he said. “Sadly it’s the tip of the iceberg.”

He urged parents to educate themselves about apps such as Nudify, and trends around sextortion and deepfakes, and ensure their children felt safe to come to them for help, adding: “Sadly many teens will not report such incidents.”

Mr Litherland said he was furious that tech giants had not been subject to greater scrutiny, regulations or penalties.

“Instead of banning (social media), why don’t we start putting pressure on these networks to start protecting the people who are on their networks,” he said.

“These apps have been created and thrown out there with no rules. Who creates an app where you can pump a photo of a kid into it, so you can generate a nude photo of that kid?”

In June, Ms Inman Grant wrote to education ministers urging them to ensure schools adhered to laws around image-based abuse after a steady increase in AI-generated abuse.

She said at the time: “Anecdotally, we have heard from school leaders and education sector representatives that deepfake incidents are occurring more frequently, particularly as children are easily able to access and misuse nudify apps in school settings.
“With just one photo, these apps can nudify the image with the power of AI in seconds. Alarmingly, we have seen these apps used to humiliate, bully and sexually extort children in the school yard and beyond. There have also been reports that some of these images have been traded among school children in exchange for money.”

Melesha Sands, Deputy Director General of the WA Department of Education, said with the rise of technology, “harmful practices such as deepfakes are emerging as a growing concern”.

“These matters are taken seriously, and the Department has a strong focus on protecting students and supporting schools to respond effectively,” she said.

“The Department works collaboratively with the eSafety Commissioner to promote safe use of social media and respond to harmful incidents.

“We recently updated our advice on promoting online safety and addressing cyberbullying to include information related to AI-generated materials. Teachers have access to the new eSafety Commission resources and professional learning.

“Principals are backed to implement consequences such as recommendations for exclusions or suspensions for students who use social media to threaten or harm others.

“The Department will continue to work closely with the eSafety Commissioner to ensure schools have the latest advice, resources, and support to address emerging online risks.”