The Next Frontier for Scams as Criminals Weaponize Artificial Intelligence to Steal Billions from Australians

The West Australian


Description

Criminals are increasingly using AI to enact sophisticated scams that experts say will change the way we operate online.

Summary

The article examines the evolving threat of AI-enabled scams targeting Australians, highlighting a new era of sophisticated fraud that leverages deepfake technology and massive data breaches. It underscores the potential for AI to enhance the efficacy of scams by creating believable deceptions using personal data, contributing to a growing issue of financial loss. The piece emphasizes the need for cooperative efforts between businesses, government, and citizens to mitigate these risks, aiming to build a more resilient defense against such criminal activities. Despite the seriousness of the threat, the article notes proactive steps being taken by financial and governmental institutions to address and prepare for this new frontier of scams.

Body

Kellie Gerardi is a US astronaut, bioastronautics researcher and author. This week, she was very nearly the victim of a sophisticated sextortion scam.

“The message came to my personal email from a throwaway account,” she wrote to her almost one million Instagram followers. “It contained my name, my home address, and a photo of my house, along with a PDF attachment called ‘KellieGerardi.pdf’.”

She opened the attachment (a move she calls “debatably stupid”) and there was a grainy image taken from outside her bedroom window.

The message attached was menacing. An anonymous person claimed to have intimate footage of Gerardi in her own home — they wrote out her address — and claimed to have access to all the data on her personal mobile phone. They said the footage they had would be released publicly if she didn’t pay them $US20,000 or if she told anyone about the threats.

Gerardi, who has been trained to operate in complex, high-stress situations when the stakes are life and death in space, says she was “shaking with fear”.

“I panicked, shut my phone and went on a different device to research and call someone for advice,” she wrote. “I learned that this is a common scam but with a new, sophisticated twist: mass US data breaches have blanketed the dark web with our personal info, which the scammers use to create threats with increasing believability: in this case collecting Google Maps screenshots of homes with realistic angles.

“I know it’s a scam. But I am still rattled.”

Scams are nothing new, but with so much of our life online, criminals are increasingly finding new and more sophisticated ways to target potential victims — and anyone can be a target.

With the rise of artificial intelligence, there is a concern we are about to enter a new era of sophisticated fraud that will be nearly impossible for the regular person to quickly detect.

Imagine receiving a video call from a loved one asking for money. But it’s not them: it is an artificially generated deepfake.
Their voice, their face. A criminal behind it all. Or perhaps it’s not your child — maybe it’s your boss.

In February, Hong Kong police reported a local finance worker had been duped into transferring $US25 million of company money to scammers.

The worker had initially received an email from his company’s UK-based chief financial officer detailing how a “secret” transaction needed to take place. He was understandably suspicious.

Those suspicions were dismissed when he took part in a multi-person video call with several senior people in the company whom he recognised. He transferred the funds.

The problem? None of the people on the call were real. They were all deepfakes of real people.

Deepfakes and text messages might sound scary. But there is another reason law enforcement across the globe is bracing for an onslaught of AI-enabled scams: data.

AI has incredible data-crunching abilities, making it easier than ever for criminals not only to contact huge numbers of people with phishing emails and text messages but to identify targets using a terrifyingly large amount of personal information — like what the outside of your bedroom window looks like.

Australia is the 15th-most data-compromised country in the world. The MediSecure ransomware attack that took place earlier this year affected 12.9 million Australians alone. And that’s just one data breach. The first quarter of 2024 saw a 388 per cent rise in data breaches, up from the final quarter of 2023.

“They (criminals) are using AI to analyse a lot of data,” Audrey Pajmon, the executive manager of fraud services at Bankwest, explained. “They use AI to analyse that data to identify a potential target.

“The other thing they use AI for is to just be more efficient. At the moment, you’ve got people still picking up phone calls, etc.
“But with AI you can just multiply efficiency in terms of attacks, and then hit more people with the phishing emails and the texts, as well as the social engineering.”

You can’t even talk to a real person when you’re being scammed anymore.

“At Bankwest we have not seen a customer fall victim to AI yet,” Ms Pajmon said. “But we know it’s coming.”

Ms Pajmon said Bankwest was watching the rise of criminal use of AI closely and was aware scammers were tricking people by posing as well-known and largely trusted companies — Telstra, Coles, or your bank, for example.

“We know it’s coming, we know it’s a threat,” she said. “They will use the ability for video cloning and voice cloning to impersonate a family member, a work colleague or a business partner. That’s what the next generation of attacks will look like.”

So you get a video call from your mother. Your child. Your best friend. They are in distress. They beg you to send them money. Most people would not hesitate. Who is going to ask their own mother “how do I know you are who you say you are?”

How is the average person supposed to protect themselves from this?

“The same rules still apply,” Ms Pajmon said, meaning the same due diligence you should be doing now when you receive a phone call, text message or email will soon have to apply even when you think you know the person on the other end of the line. “You have to stop, pause, and check who you are dealing with.”

Australians are sadly susceptible to scams. Last year, the Australian Competition and Consumer Commission revealed it had received 601,000 reports of scams, representing financial losses of $2.7 billion.

“My view is we are very respectful, compliant, and we want to do the right thing as a nation overall. So when we get a call saying it’s from Telstra, we believe it’s Telstra,” Ms Pajmon said when asked why Australians seemed particularly likely to fall victim to fraudsters. “We don’t question authority. We trust people when we talk to them.
“And what I say when I do education sessions is: It’s OK to be cynical. It’s OK to verify, to hang up.

“If we’re getting a message from Telstra or our daughters or our parents, we go, ‘great’. And then we validate and call them. We literally call the number that we know we can contact them on directly. And that’s where, unfortunately, I think where we’ll be heading to. It won’t be a high-trust world in the digital space.”

A spokesperson from the National Anti-Scam Centre said while reports of AI fraud had so far been low, there was a “growing sophistication in scams”.

“The use of AI makes scams harder for the community to identify and means that it is more difficult to identify if a scammer has used AI when these scams are reported to Scamwatch,” the spokesperson said, adding the centre was seeing AI being used to trick people on social media.

“We have received reports of scammers employing AI in the form of ‘chatbots’ on social media sites. This is primarily occurring in relation to job scams and investment scams. The bots are used to give the impression that many other real people are interested in the product, and are receiving financial benefit from the scam,” the spokesperson said. “AI is also likely to be used to make phishing scams more authentic and harder for the public to detect.”

A recent Bankwest report revealed a 129 per cent increase in phishing scams over the past financial year, accounting for 69 per cent of all scams reported. Phishing includes things like emails that look like they are from a trusted institution but are actually just criminals, or text messages claiming you have an unpaid road toll or “points” at Coles or Woolworths that need to be used.

“They are successful because we’re time-poor,” Ms Pajmon said of the text message phishing scams. “None of us want to have overdue bills. They’re also taking advantage of our economic uncertainty at the moment, and every dollar counts — those Coles points count.
“And they’re taking advantage of us rushing because, when you’re working full-time, you have family commitments, you just want to just quickly pay the bill and deal with it. So they prey on that, which is why they’re very successful.”

The National Anti-Scam Centre has a three-point checklist to help people identify and avoid scams: stop, check, report. “Scammers will create a sense of urgency. Don’t rush to act. Say no, hang up, delete,” the centre advises.

Ms Pajmon said businesses across the country were using AI to tackle the problem. “Whilst obviously the criminals are taking advantage of (AI), I don’t know any business in Australia that’s not already looking at investing in AI,” she said, adding we can all expect more verification steps in the future when we do anything online.

She said fighting scammers was an issue businesses and government must tackle together.

“The banking system alone can’t solve this. So in terms of the scams, we’re really supporting the government ecosystem approach,” Ms Pajmon said. “It’s going to be social media, telecommunications, internet providers, as well as the banking, the community and regulators all stepping in to really take critical action.

“We need to actually build fortress Australia and make Australia less of a target for scams.”