
AI Financial Fraud: How fraudsters are using AI (and how to combat it)

Tackling financial fraud has become more difficult than ever in recent years, thanks to the growing role of AI (artificial intelligence) in fraudsters' toolkits. A recent report from Signicat highlights just how widespread the technology has become, suggesting that AI now features in 42% of all financial fraud attempts, while just 22% of firms have AI defences in place. This disconnect is worrying, but sadly, it's nothing new.

Both before and after the launch of ChatGPT, the world's most popular AI chatbot, in late 2022, the use of AI in financial fraud has been rising. A 2022 report from Cifas found an 84% increase in the number of cases where AI was used to try to attack banks' security systems.

AI has made it easier for fraudsters to carry out their schemes, which has in turn pushed up overall fraud incidence. Signicat's report also found that the volume of fraud attempts is rising rapidly, up by 80% over the last three years. That rise is partly down to AI making financial fraud schemes easier to execute, but is also attributable to external factors.

Here, we take a look at some of the most common forms of AI-fuelled financial fraud, with input from Stuart Wilkie, Head of Commercial Finance at Anglo Scottish Finance, who offers insight into how to combat AI fraud at both an individual and an institutional level.

Synthetic identity fraud 

The majority of AI-aided financial fraud can be categorised as synthetic identity fraud. In this scam, fraudsters use AI to create fake identities that combine real and fabricated information, then use them to apply for loans, lines of credit or even benefits.

AI's capacity to quickly identify patterns within large datasets lets fraudsters create realistic profiles that align with demographic trends. Generative AI is also used in the identity creation process, simulating a realistic credit history. The resulting profiles are near-impossible to distinguish from real people under standard verification checks.

A report from the U.S. Government Accountability Office (GAO) estimates that more than 80% of new account fraud can be attributed to synthetic identity fraud, underlining how urgently verification measures need to improve.
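For readers curious how institutions screen for this at scale, below is a minimal sketch of one common countermeasure: anomaly scoring over new account applications with an isolation forest. The features, data and thresholds here are invented for illustration; a real verification pipeline would draw on far richer signals than these four columns.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated historical applications, one row per applicant:
# [age, years of credit history, recent credit inquiries,
#  number of profiles sharing this ID number]
ages = rng.integers(21, 70, size=500)
history = np.clip(ages - 18 - rng.integers(0, 10, size=500), 0, None)
inquiries = rng.poisson(1.0, size=500)
shared = np.ones(500)  # organic applicants rarely share an ID number
historical = np.column_stack([ages, history, inquiries, shared])

# Train on (assumed mostly legitimate) historical applications
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical)

# Synthetic identities often pair a thin, newly minted credit file with
# a burst of applications, or an ID number reused across profiles.
new_apps = np.array([
    [41, 18, 1, 1],   # plausible organic applicant
    [23,  1, 9, 4],   # thin file, many inquiries, shared ID: suspicious
])

# predict() returns 1 for inliers, -1 for anomalies
for row, label in zip(new_apps, model.predict(new_apps)):
    print(row, "->", "flag for manual review" if label == -1 else "pass")
```

The design point is that no single field gives the fraudster away; it is the combination of internally inconsistent signals that an anomaly detector can surface for human review.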

Deepfaking 

The growing adoption of biometrics as a security measure has reduced our reliance on passwords. For many people, it’s made life easier – there’s less pressure to remember umpteen different passwords, knowing that your face or your fingerprint is enough to sign into your mobile banking or social media. 

However, generative AI has made it easier for fraudsters to bypass these mechanisms through deepfaking (images, audio or video that are edited or generated with AI, depicting real or non-existent people). 

When combined with other identifying factors, such as an individual's national insurance number or the first line of their address, deepfakes are increasingly finding gaps in financial institutions' security measures, giving fraudsters access to bank accounts and much more.

Fake customer service

As well as helping scammers impersonate banking customers to gain access to their accounts, generative AI is helping them target customers by impersonating customer service representatives. In days gone by, spotting fraudulent text messages or emails was typically easier: they might contain spelling mistakes or grammatical errors, or be written in a tone of voice that didn't match your bank's.

Now that scammers are using generative AI chatbots, however, it is far easier to generate an email that sounds exactly like your bank: the corporate tone can be matched with ease, and there will never be a spelling mistake.

This side of financial fraud extends far beyond emails, too. There have been numerous instances of scammers building entire fake websites with AI-generated content, designing the pages to mimic those of a trustworthy bank.

Combatting AI fraud 

Thankfully, just as fraudsters are using AI to commit fraud, banking and finance institutions are using machine learning to detect fraudulent activity – and getting progressively better at doing so. HSBC, for example, partnered with Google in 2021 to develop an AI system for detecting financial crime. 

Their Dynamic Risk Assessment system is becoming increasingly accurate; false positives were initially common, but fell by 60% between 2021 and 2024. The more accurate these systems become, the better chance we have of eliminating financial fraud altogether.
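To illustrate the general technique, here is a minimal sketch of a supervised fraud classifier evaluated on its false positive rate, the metric behind the improvement described above. It uses a synthetic, heavily imbalanced dataset and scikit-learn; this is an illustration of the broad approach, not a description of HSBC's actual system.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Simulate labelled transactions: roughly 1% fraud, mirroring the
# heavy class imbalance seen in real payment data
X, y = make_classification(n_samples=20_000, n_features=10,
                           weights=[0.99], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# False positives are legitimate transactions wrongly flagged as fraud,
# each one a blocked payment and an annoyed customer; reducing them
# without letting fraud through is the hard part.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"false positive rate: {fp / (fp + tn):.4%}")
print(f"fraud caught (recall): {tp / (tp + fn):.2%}")
```

The trade-off this sketch makes visible is the one the article describes: tightening a model to catch more fraud tends to flag more legitimate customers, so driving false positives down while keeping recall up is the real measure of progress.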

“Generally, banks are doing a good job of shoring up their biometric systems against deepfaking – the more scammers they detect via their own machine learning algorithms, the quicker they’ll be able to identify them,” says Wilkie.

“It’s not just about combatting fraud at an institutional level, however,” he continues. “Part of ensuring that fraud doesn’t take place in the first place is about education – teaching banks’ customers to spot new and developing scams in order to avoid being caught out.

“With AI and other technological advances changing the fraud landscape on an almost daily basis, however, this can be challenging. If you receive communications from your bank via email, phone call or any other method, always be sure to interrogate what they’re actually asking you to do. Most banks will never ask you for specific details, so make sure you’re clued up at all times.”
