Lenders’ growing use of generative artificial intelligence is creating new risks for the financial system and could be incorporated into annual stress tests that examine the sector’s resilience, a deputy governor of the Bank of England has said.
“The power and use of AI is growing rapidly, and we must not be complacent,” Sarah Breeden said on Thursday, adding that while Britain’s central bank was concerned, it was not yet prepared to change its approach to regulating generative AI.
About 75 percent of financial firms use this rapidly evolving technology — up from 53 percent two years ago — and more than half of use cases involve some degree of automated decision-making, according to a recent survey from the BoE.
Generative AI systems produce text, code and videos in seconds, and Breeden said the central bank was concerned that, when used for trading purposes, AI could lead to “sophisticated forms of manipulation or more crowded trades in normal times which exacerbate market volatility in times of stress”.
The BoE could use its annual stress tests of British banks, which assess how prepared lenders are for different crisis scenarios, “to understand how AI models used for trading by banks or non-banks might interact with each other,” said Breeden, who oversees financial stability at the central bank.
The BoE is setting up an “AI consortium” with private sector experts to study the risks.
Breeden warned: “When such crowded trades are financed by leverage, a shock that results in losses for such trading strategies could be amplified into more severe market stress through feedback loops of forced selling and adverse price movements.”
Her comments at a conference in Hong Kong follow the IMF’s warning in its financial stability report last week that AI could lead to faster swings in financial markets and greater volatility in times of stress.
Breeden, who took up the role in November last year, said rules making senior bankers responsible for the areas they oversee could be adjusted to ensure they are held accountable for decisions made autonomously by AI systems.
“We particularly need to ensure that leaders of financial firms are able to understand and manage what their AI models are doing as they evolve autonomously under their feet,” she said.
While most uses of AI in financial services were “fairly low risk from a financial stability perspective”, Breeden said, “more important use cases from a financial stability perspective are emerging”, such as credit risk assessment and algorithmic trading.
In its survey, the central bank found that 41 percent of firms used AI to optimize internal processes, more than a quarter for customer service and at least a third to combat fraud.
AI is used to assess credit risk by 16 percent of companies, and another 19 percent say they plan to do so in the next three years, according to the survey.
Eleven percent of groups were using the technology for algorithmic trading, and an additional 9 percent planned to adopt it for this work over the next three years.
Breeden said the half of financial firms’ AI use cases that involve automated decision-making were roughly evenly split between “semi-autonomous decision-making”, with some human involvement, and fully automated processes with no human involvement.
“This clearly poses challenges for the management and governance of financial firms, as well as for supervisors,” she said.