Automation’s Growing Role in Financial Services
Automation is moving from the periphery of financial services to its operational core. Surveillance systems flag misconduct, onboarding platforms assess risk, and artificial intelligence models recommend, and in some cases execute, compliance actions. As these automated systems become integral to decision-making, one crucial question arises: who ultimately owns these decisions?
The Complexity of Accountability
Traditionally, accountability in financial services was straightforward. Human beings made judgments and documented their decisions, allowing regulators to clearly identify where responsibility lay. However, the emergence of automation complicates this established model.
Defining Accountability in a Tech-Driven Landscape
A pressing issue in the RegTech industry is how organizations define accountability when compliance decisions are partially or fully automated. Janet Bastiman, chief data scientist at Napier AI, emphasizes that responsibility continues to rest with the human operator, even in cases of full automation. She advocates a human-in-the-loop approach, noting that regulators have made clear that accountability remains in human hands, even as AI is integrated into Anti-Money Laundering (AML) frameworks.
Perspectives from Industry Leaders
Stephen Lovell, CPTO at RegTech firm Vixio, echoes this sentiment, affirming that accountability has not shifted; instead, it resides with the regulated entity, its senior managers, and ultimately the board. He states unequivocally, “Automation does not transfer liability; rather, it alters how inputs are gathered and processed.” Lovell encourages organizations to view AI as a decision-support tool rather than a decision-maker.
The Integration of Technology and Human Oversight
Scott Parkin, head of US operations at Zeidler, highlights the necessity for accountability in compliance frameworks, saying that all financial businesses must have established procedures that underpin this principle. As LegalTech and RegTech evolve, firms face the challenge of integrating new technologies into existing policies that still emphasize human oversight. Parkin explains, “The key is determining which aspects of compliance can be automated without undermining accountability.” In line with this, Scott Nice, CRO of Label, acknowledges organizations’ growing confidence in automating execution, but notes they often struggle to articulate who holds accountability for the outputs those systems generate.
Navigating Regulatory Challenges
When automated decisions are contested, regulatory bodies require firms to provide clear explanations regarding their data sources, processing logic, and whether consistent outcomes can be replicated. Lovell argues that effective communication around these three layers is essential for moving away from opaque “black box AI” to a more explainable form of automation. Should regulators raise concerns about automated outcomes, Nice asserts that the focus must shift from technology to governance, emphasizing the need for firms to justify the configurations of their automated systems.
Ensuring Sufficient Human Oversight in Compliance
An important consideration remains: how much human oversight is adequate in automated compliance workflows? Bastiman stresses that transparency, explainability, and auditability should be mandatory, and advocates risk-based oversight proportional to the complexity involved. Lovell takes a similar stance, suggesting that high-impact regulatory tasks warrant documented human rationale, so that decision-making reflects nuanced judgment rather than relying solely on algorithmic outputs.
The Evolution of Governance Frameworks
One of the greatest challenges facing the industry today is whether governance frameworks are keeping pace with rapid advances in automation. Bastiman believes that while some frameworks have been slow to adapt, the shift to outcomes-based approaches is helping to ease the burden. Parkin, however, expresses concern that existing policies often lag behind technological innovation, complicating compliance efforts further. He notes that compliance teams have long understood the level of human oversight necessary for technology deployment, but questions whether AI technologies may demand a new level of human engagement.
Closing the Accountability Gap
The accountability gap in AI-driven compliance is often less about the technology itself and more about the transparency of its underlying models. Andrew Davies, global head of FCC strategy at ComplyAdvantage, highlights the need for defensible and transparent automation, emphasizing that responsibility for compliance ultimately lies with the firm. He posits that while automation can improve efficiency in handling low-risk cases, firms must maintain human involvement in higher-stakes decisions, ensuring that compliance officers remain engaged throughout the workflow.
