On December 19, the U.S. Department of the Treasury issued a report summarizing the key findings of its 2024 Request for Information (RFI) on the uses, opportunities and risks of artificial intelligence (AI) in financial services. The report notes the growing prevalence of AI, including generative AI, and explores the opportunities and challenges associated with its use.
Expanding AI Integration Opportunities
The Treasury report highlights the transformative role of AI, particularly emerging generative AI technologies, in financial services. Financial institutions are increasingly leveraging AI for tasks such as credit underwriting, fraud detection, customer service and regulatory compliance. For example, financial companies are using AI to analyze alternative data, such as rent and utility payments, to expand access to credit to underserved communities. Generative AI models, capable of processing unstructured data such as customer communications, also improve operational efficiency and customer engagement. In particular, the report highlights the potential of AI to automate processes, reduce costs and increase access to financial products for historically underserved populations.
Navigating AI Risks
The report addresses the following risks associated with the deployment of AI in financial services, drawing on findings from the Treasury Department's March 2024 AI Cybersecurity Report.
- Confidentiality and data bias. Ensuring the quality, security and fairness of the data used to train AI models remains a major concern. Poorly trained AI models risk reinforcing historical biases, potentially leading to discriminatory outcomes in credit and lending decisions.
- Explainability and transparency. The complexity of AI models, particularly generative AI, often results in “black box” systems, making it difficult for companies to explain decision-making processes. This opacity could lead to increased regulatory oversight and an erosion of consumer confidence.
- Reliance on third parties. Many financial institutions rely on external AI vendors for their tools and infrastructure. This dependence increases the risks of concentration, with a few large companies dominating the market for advanced AI models.
- Illicit financing. The report also warns that AI tools could be used for fraudulent purposes, such as generating deepfake content or enhancing phishing attacks.
Policy Recommendations and Next Steps
To address these challenges, the report outlines potential next steps for Treasury, other government agencies and the financial services industry to consider: enhanced collaboration among governments, regulators and financial entities to establish consistent AI standards; stronger regulatory frameworks; industry-wide data standards and best practices; and better compliance monitoring.
Put into practice: Federal agencies continue to assess the risks of AI in the financial services sector (see our previous discussions on federal AI regulation here and here). This report demonstrates Treasury's dual focus on promoting AI-driven innovation in financial services while mitigating its risks. Financial institutions should prioritize reviewing their use of AI to ensure compliance with consumer protection laws, fair lending principles and data privacy standards.