The Dark Side of AI-Generated Caricatures


Details

Date Published
17 Feb 2026
Priority Score
2
Australian
Yes
Created
17 Feb 2026, 12:30 pm

Authors (0)

No authors linked

Description

Like many viral trends, the 'cute' fad for AI-generated caricatures has a darker side, raising concerns about privacy and data misuse.

Summary

AI-generated caricatures have gained popularity on social media, but experts highlight significant privacy and data security concerns. Uploading personal information for these depictions raises the risk of identity theft and data misuse, storing potentially sensitive information without clear user control. The article emphasizes the potential for large-scale social engineering attacks if sensitive employer-related data becomes public through AI chatbots. The discussion is particularly relevant for global AI safety policy by underscoring the importance of data protection in the age of sophisticated AI tools.

Body

The dark side of those ‘cute’ AI-generated caricatures
The New Daily, Feb 17, 2026 (updated Feb 17, 2026)
[Image source: shannon_elyse26 / TikTok]

Many social media users will have noticed a flood of AI-generated caricatures on their feeds in recent weeks. You may even have been tempted to create one of the colourful images yourself to replace that soft-focus profile photo taken 10 years ago.

The trend invites users to upload a photo of themselves to ChatGPT or another generative-AI tool, accompanied by other personal information such as their job and what it involves, personality features and what they wear to work. Those who use ChatGPT more often can simply ask the tool to create a caricature “based on my job and everything you know about me”.

Security analyst Josh Davies noted that 2.6 million of the cartoon-like images had been added to Instagram alone by February 9. “I am currently looking through different posts, and have identified a banker, a water treatment engineer, HR employee, a developer and a doctor in the last five posts I viewed,” Davies said in a blog post.

Our own quick search on TikTok and Instagram returned thousands more, including teachers, chefs, nurses, beauticians, a DJ, a truck driver, a lawn mower, newsreaders and even a bona fide real-life cartoonist. Most seemed thrilled with the “cute” results.

“Haha jumped on this trend!” posted the truckie. “Pretty accurate if you ask me lol.”

“AI turned me into a caricature and honestly… it understood the assignment 😏🤣,” wrote fellow TikTok user shannon_elyse26. “Physical therapist life, but with a little side-eye, confidence, and main-character energy.”

But, like many viral trends, this one has a darker side. Experts have warned that by uploading not just photos, but also personal and professional information, users can be left open to data and identity theft or misuse. There is also a risk that publicly posted photos could be copied and used out of context.
[TikTok embed: @louguillermomateo — “POV: You ask ChatGPT to create a caricature of you and your job… and it really said: calm, firm, and low-key juggling everything 😂💼” #ChatGPT #AICaricature #WorkLife #MultitaskingQueen #AIArt]

“Every time these trends pop up, people are often quicker to jump on the bandwagon than to question what might actually lie behind them,” global cybersecurity advisor Jake Moore told Forbes. He said chatbots collect all the information, analysing and storing it, and potentially also using it for research and future product development.

“While it feels harmless and fun, this behaviour raises serious data privacy concerns and could increase the risk of identity theft in the future,” warned another security expert, Matt Conlon, CEO of Cytidel. “Once that information is uploaded, there’s no guarantee it can be fully removed or controlled. What starts as entertainment today could become a real-world security issue tomorrow if that data is misused or exposed.”

[Instagram embed: a post shared by Johnny Alexander Briedis (@johnnyabriedis)]

Josh Davies told technology website The Register that the AI-generated caricatures can also put people and their employers at risk of sensitive data theft, “social engineering attacks” and takeovers of LLM (large language model) chatbot accounts. “At the time of writing, this is a hypothetical risk,” Davies said.
“But given the scale of participants publicly posting this trend, we believe it is highly likely that some could be exploited in this way with the LLM account takeover. The fact that users are posting this personal work information publicly and using a prompt that said ‘based on everything you know about me’, it is feasible that sensitive information related to their employer could be viewable in the prompt history if takeover is successful.”

Experts advise users to check the privacy and security settings on any apps they use. People who want to jump on the caricature bandwagon should make sure they are happy for the photo they share to potentially stay online forever, and be wary of sensitive personal or workplace details it may show. Prompts should also be kept general, to limit the private information being shared.

“A typical rule to follow is that if you would not share it publicly, you probably want to avoid including it in an AI prompt,” cybersecurity researcher Oliver Simonnet told Forbes.

Topics: AI, Social Media, Technology