Deepfake Technology Could End Work From Home Privileges
7NEWS
Details
- Date Published
- 29 Oct 2024
- Priority Score
- 3
- Australian
- Yes
- Created
- 8 Mar 2025, 02:41 pm
Description
Getting to work from home a few days a week has become non-negotiable for many Aussie employees. But a new website, and the tech behind it, could spell the end of the perk forever.
Summary
The article highlights the growing threat of deepfake technology disrupting work-from-home arrangements by enabling fraudulent video conferencing. As deepfakes become harder to detect, the integrity of virtual business interactions is at risk, potentially undermining remote work. The piece discusses the broader implications of synthetic media, including the risk of cybercrime in business settings and the challenges in distinguishing real from fake content. Although defensive tools like deepfake detection software exist, their effectiveness is limited, raising pressing concerns about AI's role in both creating and mitigating fraudulent content.
Body
Deepfakes might have seemed funny at first, when the joke was on Tom Cruise. But the explosion of synthetic content and ‘fauxtography’ is rapidly changing how we interact in our increasingly online lives. Seeing is no longer believing. The US election race is rife with digitally manipulated images, and the Queensland election has grappled with the same issue. Countless deepfakes of Harris and Trump are doing the rounds on Reddit and Twitter, all presented as 100 per cent genuine.

Experts say video conferencing could be the new frontier for criminals looking to scam businesses that rely on technology for their daily trade. The prospect of deepfake Zoom calls, where a person believes they are sharing commercially sensitive information with their boss but is in fact talking to an online clone controlled by a criminal, could tank work from home for good. A report by the World Economic Forum found 66 per cent of cybersecurity professionals experienced deepfake attacks in 2022, and 38 per cent of large companies have been targeted by deepfake fraud.

Some of this synthetic content is extremely sophisticated, and it is getting harder to separate fact from fiction given the sheer volume of material coming at us. If you use social media, you are exposed to a pool of more than 3.2 billion images and over 700,000 hours of video shared every day.

There are ways to spot the fakes. Synthetic videos have their own oddities, like slight mismatches between sound and motion, and distorted mouths. They often lack the facial expressions or subtle body movements that real people make, and an absence of blinking can be a sign that a video is computer generated. However, newer versions of software like Midjourney are rapidly ironing out these kinks. For those who don’t feel like being duped, there is software for deepfake detection and deepfake watermarking, but it comes with its own limitations and can’t always be relied on.
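The article doesn’t name any specific detection method, but one classic image-forensics heuristic behind some of these tools is error level analysis (ELA): recompress a JPEG and look at where the compression error differs, since regions pasted in or edited after the original save often stand out. Below is a minimal illustrative sketch in Python using the Pillow library (an assumption on our part, not a tool mentioned in the article), and like all such heuristics it flags candidates for a closer look rather than proving anything.

```python
from io import BytesIO

from PIL import Image, ImageChops  # Pillow: pip install Pillow


def error_level_analysis(img: Image.Image, quality: int = 90) -> Image.Image:
    """Return an amplified difference map between an image and a JPEG
    recompression of itself. Edited regions often show a different error
    level than their surroundings. This is a heuristic, not proof."""
    rgb = img.convert("RGB")

    # Recompress in memory at a known JPEG quality.
    buf = BytesIO()
    rgb.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)

    # Per-pixel absolute difference between original and recompressed copy.
    diff = ImageChops.difference(rgb, recompressed)

    # Scale the faint differences up to the full 0-255 range so they are
    # visible to the eye; guard against a perfectly uniform (zero-diff) image.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda p: min(255, int(p * scale)))
```

To use it, open a suspect JPEG with `Image.open("photo.jpg")`, pass it through `error_level_analysis`, and inspect the result with `.show()` or save it for side-by-side comparison; interpreting the output still takes the same savvy the article asks of users of the REVEAL tool.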
Image verification tools like the REVEAL Image Verification Assistant are also handy for detecting mis- and disinformation, but you need to be savvy to interpret the results. And it raises the question: should we be using AI tools to fight AI fakes? The best defence you have is yourself. Think first and ask simple questions to determine whether something is fake. Quite often your own conclusions will be enough to tell what’s real and what’s not. Shaun says… If you are going to experiment with generative AI, here are a few things to keep in mind.