Artificial intelligence is transforming financial fraud at an alarming rate, making scams more sophisticated and harder to detect. While fraud attempts have risen by 80% over the past three years, only 22% of companies have put AI-powered defences in place. Stuart Wilkie, head of commercial finance at Anglo Scottish Finance, explores the evolving threat landscape and how institutions – and individuals – can fight back.
The fight against financial fraud has become harder than ever in recent years, thanks to the growing prevalence of artificial intelligence (AI). A recent Signicat report highlighted AI's reach into the murky world of financial fraud, suggesting that AI is now behind 42% of all financial fraud attempts – while only 22% of companies have AI-powered defences in place. This disconnect is troubling but, unfortunately, it is not new.
Both before and after the launch of ChatGPT, the world's most popular AI chatbot, at the end of 2022, the use of AI in financial fraud tactics was on the rise. A 2022 CIFAS report revealed an 84% increase in the number of cases in which AI was used to try to attack banks' security systems.
AI has made it easier for fraudsters to carry out their fraudulent activity, which in turn has driven up the overall incidence of fraud. The Signicat report also found that the volume of fraud attempts is rising rapidly, with total fraud attempts up 80% over the past three years. This is partly down to the role AI plays in facilitating financial fraud, but it is also attributable to external factors.
So, what are some of the most common forms of AI-powered financial fraud, and how can AI fraud be fought at both the individual and institutional level?
The majority of AI-assisted financial fraud can be classified as synthetic identity fraud. In this scam, fraudsters use AI to create false identities made up of a combination of real and fabricated information, before signing up for loans or credit lines, or even claiming benefits.
AI's ability to quickly identify patterns within large data sets has given fraudsters the means to create realistic profiles that align with demographic trends. Generative AI is also used in the identity-creation process, simulating a realistic credit history. These profiles are therefore almost impossible to distinguish from real people under standard verification checks.
A report from the US Government Accountability Office (GAO) estimates that over 80% of new account fraud can be attributed to synthetic identity fraud – underlining the vital importance of improving security measures.
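To illustrate why standard, field-by-field verification struggles with these profiles, below is a minimal sketch in Python of the kind of consistency check a lender might layer on top of basic validation – flagging, for example, a credit history that is implausibly long for the applicant's age, or contact details reused across many recent applications. The field names, data and thresholds are purely illustrative assumptions, not any institution's actual rules.

from datetime import date

# Hypothetical applicant record; field names and thresholds are illustrative only.
applicant = {
    "date_of_birth": date(2001, 5, 14),
    "credit_history_years": 18,          # claimed length of credit file
    "phone": "+44 7700 900123",
    "address": "1 Example Street, Glasgow",
}

# Simulated count of recent applications sharing the same contact details
# (in practice this would come from a cross-application database).
recent_applications_sharing_phone = 7

def synthetic_identity_flags(app: dict, shared_phone_count: int) -> list:
    """Return a list of red flags suggesting a possibly synthetic identity."""
    flags = []
    age = (date.today() - app["date_of_birth"]).days // 365

    # Credit history longer than plausibly possible for the applicant's adult lifetime.
    if app["credit_history_years"] > max(age - 18, 0):
        flags.append("credit history longer than adult lifetime")

    # Same phone number reused across many recent applications (velocity check).
    if shared_phone_count > 3:
        flags.append("contact details shared across multiple applications")

    return flags

print(synthetic_identity_flags(applicant, recent_applications_sharing_phone))

A profile that passes each individual check (valid date of birth, plausible address, real credit file) can still be caught by cross-field consistency and velocity rules like these – which is exactly the gap AI-generated identities are designed to slip through.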
The growing adoption of biometrics as a security measure has reduced our reliance on passwords. For many people, it has made life easier – there is less pressure to remember a different password for every account, knowing that your face or fingerprint is enough to log in to your mobile banking or social media.
However, generative AI has allowed fraudsters to bypass these mechanisms through deepfakes (images, audio or video that are edited or generated with AI, depicting real or non-existent people).
When combined with other identifying details – such as a person's National Insurance number or first line of address – deepfakes are increasingly slipping through gaps in financial institutions' security measures, giving fraudsters access to bank accounts and much more.
As well as helping scammers pose as banking customers to access their accounts, generative AI also helps them target customers by impersonating customer service representatives. In years gone by, spotting scam texts or fraudulent emails was generally easier – they might contain spelling mistakes or grammatical errors, or be written in a tone of voice that did not align with your bank's.
Now that scammers use generative AI chatbots, producing an email that sounds exactly like your bank is far easier – they can readily match the company's email tone and will never make a spelling mistake.
This side of financial fraud also extends far beyond emails – there have been a number of cases of scammers creating fake websites using AI-generated content, designing the pages to imitate those of a trusted bank.
Fortunately, just as fraudsters are using AI to commit fraud, banks and financial institutions are using machine learning to detect fraudulent activity – and are gradually getting better at it. HSBC, for example, teamed up with Google in 2021 to develop an AI system for detecting financial crime.
Their dynamic risk assessment system is becoming more and more accurate; false positives were initially common, but these were reduced by 60% between 2021 and 2024. The more accurate these systems become, the closer we get to eliminating financial fraud altogether.
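As a rough illustration of the general approach – not HSBC's actual system – the sketch below trains a simple classifier on synthetic, labelled transaction data and reports its false positive rate, the metric highlighted above. The features (amount, hour of day, merchant risk score) and all figures are assumptions made for the example.

# A minimal sketch of ML-based fraud detection on transactions.
# Data and features are synthetic and illustrative, not any bank's real system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n = 20_000
fraud = rng.random(n) < 0.02  # assume ~2% of transactions are fraudulent

# Illustrative features: amount, hour of day, and a merchant risk score.
amount = np.where(fraud, rng.lognormal(6, 1, n), rng.lognormal(3.5, 1, n))
hour = np.where(fraud, rng.integers(0, 6, n), rng.integers(0, 24, n))
merchant_risk = np.clip(rng.normal(0.7, 0.2, n) * fraud + rng.normal(0.3, 0.2, n) * ~fraud, 0, 1)

X = np.column_stack([amount, hour, merchant_risk])
y = fraud.astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=0)
model.fit(X_train, y_train)

# False positive rate: legitimate transactions wrongly flagged as fraud.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"False positive rate: {fp / (fp + tn):.3%}")
print(f"Fraud caught (recall): {tp / (tp + fn):.1%}")

The class_weight="balanced" setting is there because fraud is rare relative to legitimate activity; without it, a naive model could minimise its error simply by flagging nothing. Driving down false positives, as HSBC's system has, is largely a matter of better features, more data and continual retraining.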
Generally, banks are doing a good job of shoring up their biometric systems against deepfakes – the more scammers they detect via their own machine learning algorithms, the faster they can identify them.
It is not just a question of fighting fraud at the institutional level. Part of ensuring fraud does not take place in the first place comes down to education – teaching banking customers to identify new and developing scams so they avoid being caught out.
With AI and other technological advances reshaping the fraud landscape almost daily, however, this can be difficult. If individuals receive communications from their bank – whether by email, phone call or any other method – they should question what they are really being asked to do. Most banks will never ask for certain specific details, so people must make sure they stay informed at all times.
Stuart Wilkie, head of commercial finance at Anglo Scottish
"AI vs finance: the battle against deepfake fraud" was originally created and published by Leasing Life, a GlobalData owned brand.