Financial Institutions Embrace AI Despite Governance Challenges
A recent report by Hawk and Chartis reveals that 90% of financial institutions are actively pursuing the integration of artificial intelligence into their financial crime compliance (FCC) operations. However, a significant challenge has surfaced: governance of the AI models they deploy.
The survey, which gathered responses from 125 compliance and risk leaders at banks around the world, found that more than half of the technical barriers preventing institutions from expanding AI within their anti-financial crime programs trace back to model governance. While building a model is foundational, the more complex tasks of validating, operationalizing, and maintaining models over time present far greater hurdles, often exceeding the resources available to most teams.
Concerns About Data Quality Remain Predominant
Data quality is the foremost concern, as highlighted by 91% of respondents who included it in their top five issues. Inadequate or low-quality training data allows models to absorb irrelevant noise alongside meaningful signals, leading to unnecessary false positives. Moreover, regulatory agencies are increasingly mandating that institutions prove the fitness of the data supporting their models, thereby establishing data quality as both a governance and technical imperative.
The second most pressing issue, identified by 86% of participants, is integration with existing systems. Even a well-developed model is ineffective if it cannot connect smoothly with the systems that supply its data or consume its outputs. Such integration gaps not only delay implementation but also force manual workarounds, complicating consistent model behavior and governance documentation.
Additionally, interpreting or trusting model outputs remains a point of concern for 83% of respondents. When compliance teams cannot grasp why a model flagged a particular transaction, they struggle to act confidently or to clarify their decisions to auditors or regulators. The report emphasizes that explainability has transitioned from being an optional feature to a necessary aspect of governance, as opaque models can undermine the crucial human oversight essential for effective financial crime controls.
Governance Challenges Intensify Post-Deployment
The report also examines how challenges evolve once models move from pilot phases into full production. While pre-deployment concerns about data quality and integration persist, new challenges emerge at scale. Some 43% of respondents cited growing concern about the difficulty of updating live models. With data science teams often stretched thin, updates can become slow and reactive, leaving institutions exposed to emerging threats their models were never designed to detect.
Additionally, 38% highlighted the growing challenge of sustaining governance across an expanding portfolio of models. Maintaining accurate documentation, version control, and audit trails becomes significantly more complex as the number of deployed models increases. Furthermore, 33% of respondents noted that the challenges of interpreting and trusting model outputs continue well beyond the initial deployment stage.
Key Elements of Effective Model Governance
The report delineates three foundational pillars for effective model governance aimed at FCC teams. The first is comprehensive documentation throughout the entire model development lifecycle, rather than solely when required by regulators. This documentation should detail the model’s objectives, data sources, performance metrics, and any modifications made over time.
The second pillar emphasizes the need to build genuine trust in model outputs, ensuring teams can understand and justify the decisions produced by their models. Explainable AI is rapidly evolving from a desirable feature to a regulatory expectation. The final pillar stresses the importance of maintaining model performance post-deployment through systematic retraining processes. Models developed on historical data lose relevance without adequate review and update protocols integrated into their governance frameworks.
In response to these governance challenges, Hawk's Analytics Studio platform provides automated documentation and human-readable explanations of model decisions, and enables compliance teams to retrain models independently, without requiring additional data science resources.
For more insights, read the full report here.
