Facebook, Instagram still showing ads for deepfake AI nude apps
Crikey
Details
- Date Published
- 10 July 2024
- Priority Score
- 0
- Australian
- No
- Created
- 8 Mar 2025, 02:41 pm
Authors (1)
- Cam Wilson
Description
Facebook and Instagram continue to show ads for apps that generate sexual imagery of people without their consent, despite such apps being against the platforms' own rules.
Body
Facebook and Instagram advertisements promoting AI apps for generating sexual “deepfake” images of people without their consent are still being shown to Australians, despite the issue having been repeatedly raised with Meta.

As the Senate is set to consider a law that criminalises the creation of digitally created sexually explicit content made without an individual’s consent, Meta, which owns Facebook, Instagram, WhatsApp and Threads, is making money from advertisements for apps designed to create exactly that.

Meta’s Ad Library contains several examples of advertisers promoting different applications with messages and graphics that explicitly suggest using their applications to generate new images of people “without clothes” or to face-swap someone to create “NSFW [not safe for work] pics”.

In some cases, advertisers have taken steps to avoid detection. In one case, the advertisement only mentioned that the app could “remove clothes from photos” in its image, while its caption promoted the application as a “writing assistant”. These ads could still be found by searching obvious terms.

These advertisements, which originate from overseas but are targeted at Australian users, send people to Apple iOS App Store listings that do not obviously promote the applications’ use for non-consensual sexual content creation. (This is likely an attempt to avoid being removed from the App Store, which does not permit these sorts of apps.) Crikey has chosen not to name the applications.

While the Ad Library does not give specific information about which Australian users were shown these advertisements, Meta does disclose that they were seen by European Union users as young as 18.

Meta’s policies do not allow posts or advertisements that solicit sexual content (AI-generated or not), or content that promotes non-consensual sexual imagery. And yet the company has repeatedly allowed its advertising platform to promote these applications to users.

In April, 404 Media reported on Meta allowing advertisements for these exact types of applications. The outlet sent a few examples to Meta, which removed them.

This month there were dozens of similar advertisements that had run over the past few months. In a handful of cases, the exact same advertisements, using the same graphics and promoting the same applications, appeared from different Facebook accounts.

When Crikey sent through one example, it was removed by Meta. Other examples seen by Crikey but not shown to Meta remain online, even those using the exact same images and language as the removed ones. Crikey declined a request from Meta to send through further examples. The company failed to remove advertisements monitored by Crikey, demonstrating how the US$1.3 trillion tech company is still failing to successfully moderate its advertising platform.

A previous investigation by Crikey showed that Meta is also showing advertisements selling drugs, guns and even a monkey to Australian users.

Last month, a 17-year-old boy was warned by Victoria Police after fake sexual images of 50 schoolgirls were circulated online. In response, eSafety commissioner Julie Inman Grant said that deepfake images are “commonly” referred to her office and that there are now thousands of these applications available.

Meta declined to provide comment.