Moody’s Analytics recently took a deeper dive into the role AI regulation will play as the technology changes risk and compliance.
Artificial intelligence (AI) is poised to revolutionize the fields of compliance and risk management. With laws on the safe application of AI technologies in sight, several companies are proactively developing policies that promote responsible and ethical use of AI, anticipating future regulatory frameworks.
A comprehensive Moody’s study, encompassing feedback from 550 compliance and risk management leaders in 67 countries, reveals a strong consensus: nearly 70% believe AI will significantly influence their practices.
Despite this consensus, the integration of AI into risk and compliance roles remains limited, though early adopters report a positive impact, citing improvements in the efficiency of manual processes (17%) and in staff performance (27%).
However, concerns persist among these leaders, particularly regarding confidentiality and data protection (55%), transparency of decisions (55%), and the risks of misuse or misunderstanding (53%). These concerns underline the vital need for regulation to ensure the safe and responsible deployment of artificial intelligence.
The emerging regulatory landscape is diverse and dynamic. Laws are being drafted in several jurisdictions, including the US, the EU, and the UK. China stands out as one of the few countries that has already finalized laws strengthening security around generative AI (GenAI) and establishing oversight agencies.
The US is taking a voluntary approach with the NIST AI Risk Management Framework, which focuses on safety, security, privacy, fairness, and civil rights, complemented by the White House’s Blueprint for an AI Bill of Rights. The EU categorizes AI systems by risk, requiring assessment and reporting for high-risk systems. The UK is encouraging existing regulators to adopt sector-specific measures.
Surprisingly, the professionals surveyed by Moody’s are poorly informed about these regulatory efforts: only 15% consider themselves well informed, while a third say they are not aware of them at all. This contrasts sharply with the strong demand for new laws governing the use of AI, a sentiment shared by 79% of respondents, and highlights the gap between regulatory developments and industry awareness.
Respondents urge regulators to prioritize privacy and data protection (65%), accountability (62%), and transparency (62%). They advocate for globally consistent regulations that require transparency and human oversight of AI-driven outcomes. Regulations, they add, must be adaptable enough to keep pace with the rapid evolution of artificial intelligence and should take risk- and principles-based approaches to combat financial crime effectively.
Forward-thinking organizations aren’t waiting for regulations. Many are aligning their AI strategies with broader ethics and risk management frameworks, knowing that future regulations will require such policies. Responsible AI policies now include accountability, require human validation of AI-influenced decisions, and emphasize transparency and explainability. They promote strong data governance and privacy protections, ensuring appropriate control over access to data.
In the area of combating financial crime, initiatives such as the Wolfsberg Group’s five principles for the use of artificial intelligence are emerging, emphasizing legitimacy, proportionate use, and expertise in AI applications. Despite these advances, challenges remain in explaining AI-driven decisions to regulators, determining acceptable levels of human involvement, and managing explainability, privacy, and bias.