Business executives are facing an influx of hyper-personalized phishing scams generated by artificial intelligence bots as rapidly developing technology facilitates advanced cybercrime.
Leading companies such as UK insurer Beazley and e-commerce group eBay have warned of a rise in fraudulent emails containing personal information likely obtained through online profile analysis by AI.
“The situation is getting worse and very personal, and that’s why we suspect AI is behind a lot of this,” said Kirsty Kelly, head of information security at Beazley. “We’re starting to see very targeted attacks that have been able to extract an immense amount of information about an individual.”
Cybersecurity experts say the increase in attacks comes at a time when AI technology is rapidly advancing, as tech companies race to create ever more sophisticated systems and launch products popular with consumers and businesses.
AI bots can quickly ingest large amounts of data about a company or individual’s tone and style and replicate those characteristics to create a convincing scam.
They can also analyze a victim’s online presence and social media activity to determine which topics they are most likely to respond to, helping hackers generate tailored phishing scams at scale.
“The availability of generative AI tools lowers the entry threshold for advanced cybercrime,” said Nadezda Demidova, cybercrime security researcher at eBay. “We have seen an increase in the volume of all kinds of cyberattacks,” particularly “refined and narrowly targeted” phishing scams, she added.
Kip Meintzer, an executive at security firm Check Point Software Technologies, said at a recent investor conference that AI has given hackers “the ability to write a perfect phishing email.”
More than 90% of successful cyberattacks begin with a phishing email, according to the U.S. Cybersecurity and Infrastructure Security Agency. As these attacks become more sophisticated, their consequences have become increasingly costly, with the global average cost of a data breach increasing by almost 10% to $4.9 million in 2024, according to IBM.
Researchers have warned that AI is particularly effective at creating business email compromise scams – a specific type of malware-free phishing in which fraudsters trick recipients into transferring funds or disclosing confidential company information. This type of scam has cost victims around the world more than $50 billion since 2013, according to the FBI.
AI is “used to analyze everything to see where a vulnerability is, whether in the code or in the human chain,” said Sean Joyce, global head of cybersecurity at PwC.
Phishing scams generated using AI may also be more likely to bypass companies’ email filters and cybersecurity training.
Basic filters, which typically block repeated mass phishing campaigns, could struggle to detect these scams if AI is used to quickly generate thousands of reworded messages, eBay’s Demidova said.