ChatGPT's 'Creepy' New Viral Trend
News.com.au
ENRICHED
Details
- Date Published
- 9 Feb 2026
- Priority Score
- 2
- Australian
- Yes
- Created
- 9 Feb 2026, 06:15 pm
Authors (0)
No authors linked
Description
There’s a new AI trend that’s making us wonder if it knows us a little too well.
Summary
The article explores a new viral trend involving ChatGPT where users ask the AI to create caricatures based on personal data it already holds. This trend highlights concerns about privacy, as it showcases the AI's ability to leverage user interactions to create eerily accurate depictions. The piece underscores the potential risks of data management and privacy in AI systems, emphasizing the users' lack of control over personal information once uploaded. Although the trend appears innocuous, it raises significant questions about the extent of AI knowledge and privacy implications, contributing to broader discussions on AI governance and safety.
Body
ChatGPT’s new viral trend proves how well it knows you

How well does your chatbot know you? ChatGPT is spooking users with its insight in a new viral trend sweeping Twitter.

Erin Christie
less than 2 min read
February 9, 2026 - 10:21PM

There’s a new AI trend that’s making us wonder if it knows us a little too well.

Worldwide, people have taken to asking the AI platform ChatGPT to create a caricature of them based on what it already knows about them.

This information usually comes from their chat history, and the accuracy can be pretty astounding.

Users on X have been sharing their caricatures, pointing to how well they’ve been captured by the AI tool.

Many include background details that round out the picture: books with uncanny titles, screens showing graphs or editing software, flags, and even fantasy football stats can be spotted in the caricature images.

While similar images can be made in apps like Cartoonify, ChatGPT offers the twist of proving how well it “knows” you based on what you’ve previously asked or searched on the platform.

This teacher asked ChatGPT to make a caricature. Picture: X/@RyanKennedy_22

A ChatGPT version of a medical student. Picture: X/@annie_ameh_

However, the excitement around the trend feels a little paradoxical, given the concern many hold that AI “knows too much”.

If it knows us better than we know ourselves, what does that say about our privacy, and about what AI platforms choose to do with the data we provide to them?

While it seems like a harmless trend, AI platforms aren’t bound by confidentiality agreements the way doctors, lawyers, and psychologists are.

David Grover, Senior Director of Cyber Initiatives at Baylor University, told KWTX that once you upload information, you lose control of it.

“In general, all of them are going to save your information and that information is going to go into some kind of storage and at that point, you don’t really know what that company is doing with your information,” he explained.

“You need to be careful about any image that you put in and anything that you put online, because that becomes who you are, and the more we get into the digital world, the more challenging it’s going to become to protect.”