An inter-ministerial task force has published an interim report inviting public comment on the use of artificial intelligence (AI) in the financial sector. Created at the end of 2022, the team aims to prepare for the growing integration of AI in finance and to establish guiding principles for financial regulation on the subject.
Although AI has many benefits in finance, concerns remain about its potential misuse, including the risks of fraud, misinformation and privacy violations, which highlight the need for regulatory oversight. The team includes representatives from the Ministry of Justice, the Ministry of Finance, the Competition Authority, the Securities Authority, the Capital Market Authority and the Bank of Israel. The report is open for public comment until December 15.
The team’s main position is that AI should be encouraged in the financial sector because of its many benefits, such as reducing operating costs, improving the quality of products and services, expanding financial accessibility, and assisting financial entities with regulatory compliance and enforcement. However, the use of AI also carries risks in terms of transparency, privacy and reliability. Additionally, specific risks to financial stability have been identified, such as the potential for AI to trigger harmful “herd behavior” (e.g. mass buying or selling of securities or sudden withdrawals from banks). The report also discusses risks of cybersecurity, financial fraud and misinformation, as well as competition concerns, particularly if access to advanced AI is limited to dominant financial entities.
As noted in the report, the team emphasizes a risk-based regulatory approach, in which the level of supervision is tailored to the importance of the financial service and its impact on the customer. For example, an AI chatbot providing basic customer service would be subject to limited regulatory requirements, while an AI-based credit underwriting system, having a significant effect on individuals, would be subject to stricter regulation.
One recommendation addresses the “black box” problem – the difficulty of fully explaining how AI systems make decisions. The team suggests distinguishing between general transparency about how the AI system works and specific explanations of individual decisions. They recommend a general disclosure requirement for all AI systems, with additional specific disclosure requirements based on factors such as human involvement in the process.
Human involvement is a key consideration in the use of AI: human oversight can mitigate risk, but excessive involvement may reduce the effectiveness of AI systems. To address this tension, the team proposes a “progressive model of human involvement,” balancing general oversight with direct involvement in medium-to-high-risk decision-making.
The report identifies three financial areas where AI is already being applied: investment advice and portfolio management, credit underwriting in bank lending, and insurance underwriting.
In investment advice and portfolio management, AI offers the benefit of expanding access to investment services. However, the report notes risks such as failure to uphold fiduciary duties, “gamification” that may encourage risky behavior, potential declines in service quality, and reliance on a few dominant systems. A key recommendation is to update the 2016 “Online Services Guidance” to address both terminology (e.g. defining “generative” and “explanatory” AI) and substantive requirements, such as clarifying licensees’ roles in evaluating the system’s outputs.
For credit underwriting, the team suggests relying on existing regulations, which it deems suitable for meeting AI challenges. Still, concerns remain about a “credit push,” in which AI could promote excessive borrowing. To mitigate this risk, the team recommends disclosure requirements on the use of AI in credit underwriting, ensuring transparency for both customers and regulators.
In insurance underwriting, AI can improve the alignment of premiums with risk through advanced modeling. The recommendation is to maintain the current regulatory framework for risk management and consumer protection, including privacy safeguards, while updating it as necessary to address AI-specific risks, such as the management of model-related risks and requirements for disclosure and customer information when AI is used in customer interactions, as with chatbots.