AI heralds the next generation of financial scams
Voice cloning is just one of the new tools in the tricksters’ armoury
Siddharth Venkataramakrishnan
It was last spring when Paddric Fitzgerald received a telephone call at work. He had been playing music via his phone, so when he picked up, the voice of his daughter screaming that she had been kidnapped erupted over the speakers.
“Everyone has those points in their lives like ‘Oh, that moment I almost drowned as a kid’,” he says. “It was one of the most emotionally scarring days of my life.”
Declining an offer of a firearm from a colleague, Fitzgerald, a shop manager based in the western US, raced to get cash out from a bank, while staying on the phone.
“[My daughter] was screaming in the background, saying they’d cut her while I was waiting in line,” he says. “I was going to give everything that I have financially.”
It was only a chance text from his daughter that revealed that the voice on the phone didn’t belong to her. It was a remarkably cruel and elaborate scam generated with artificial intelligence.
Fitzgerald’s story is a terrifying example of how AI has become a powerful new weapon for scammers, forcing banks and fintechs to invest in the technology to keep pace in a high-tech arms race.
“I had no protection over my child in that moment — a year later, I’d love to find that person and just make them realise how evil what they did was, and they did it with keystrokes,” says Fitzgerald. “Are we really that advanced as a society if we can do that?”
The continued evolution and uptake of the technology means scammers do not just pose a threat to the unaware or vulnerable. Even cautious consumers are at risk of huge financial losses from AI-powered fraud. FT Money explores the latest developments.
Increasing sophistication
Identifying the scale of AI use by scammers is a difficult task, says Alex West, banking and payments fraud specialist at consultant PwC. He was one of the authors of a report into the impact of AI on fraud and scams last December, produced in collaboration with cross-industry coalition Stop Scams UK. It identified the kind of “voice cloning” that targeted Fitzgerald as one of the biggest ways in which criminals are expected to use AI.
“Scammers are already very successful, and it could be that they just don’t need to use this type of tech, or it could be that they are using AI and we just aren’t able to distinguish when it has been used,” he says. “[But] it’s clearly going to drive an increase in more sophisticated scam attempts.”
Steve Cornwell, head of fraud risk at high street lender TSB, says the rising sophistication of the technology is a major worry for banks.
“If you think of the way Generative AI is coming along, how long [is it] before that AI solution could have a real-time conversation with you [using] a synthetic voice?” he says.
Figures from banking industry trade body UK Finance show a welcome trend, with fraud losses falling by 8 per cent year on year in 2022.
But one senior politician who did not wish to be named says that increased adoption of AI — OpenAI’s ChatGPT reached around 100mn monthly users in two months — could reverse that trend.
“Scammers are very well financed and entrepreneurial,” the person says. “That’s the thing I’m concerned about.”
Data from Cifas, a not-for-profit fraud prevention service in the UK, also gives cause for concern. Figures from 2022 show identity fraud rose by nearly a quarter, while reports of AI tools being used to try to fool banks’ systems increased by 84 per cent.
“We’re seeing an increased use of deepfake images, videos and audio being used during application processes, along with synthetic identities being identified as a result of ‘liveness’ checks that are now being carried out at the application stage,” warns Stephen Dalton, director of intelligence at Cifas.
Speaking at Davos on Wednesday, Mary Callahan Erdoes, JPMorgan’s head of asset and wealth management, said the use of AI by cyber criminals was a big concern. The bank has spent $15bn a year on technology in recent years and employs 62,000 technologists, many of them focused solely on combating the rise in cyber crime.
“The fraudsters get smarter, savvier, quicker, more devious, more mischievous,” she added.
PwC and Stop Scams also identified artificially generated videos, better known as deepfakes, as a major risk. The technology, which first emerged in 2017, has rapidly advanced, says Henry Ajder, an expert on AI-generated media, who has advised companies including Meta, Adobe and EY.
“What’s happened in the last 18 months is the equivalent of like two decades of progress compared to the previous four years,” he says. “The barrier to entry is much lower than it was.”
The quality of these videos has improved remarkably, says Andrew Bud, chief executive and founder of online identity verification provider iProov. He points to a recent study which found that more than three-quarters of participants were unable to identify deepfakes.
“Good quality deepfakes cost about $150 on the dark web,” he continues. “You have a whole supply chain developing for AI-supported fraud, with R&D departments who build sophisticated tools and monetise them on the dark web.”
Natalie Kelly, chief risk officer for Visa Europe, warns there is a constellation of criminal-focused systems, such as WormGPT, FraudGPT and DarkBART. She says: “It can be hard to tell the authentic from the artificial these days.”
Using those tools, available via the dark web, and communicating through dedicated hacking forums and messaging apps, criminals can offer malware-writing services or craft advanced phishing emails.