AI is making lending decisions, flagging fraud, and managing investment portfolios. Regulators are catching up. And the fintechs caught in the middle are trying to build compliance frameworks for rules that keep shifting under their feet.
We asked industry leaders what compliance challenges they’re seeing on the ground as AI takes on a bigger role in financial decision-making. Their answers paint a picture of an industry navigating real risk with incomplete guidance.
The data problem comes first
For investment firms, the starting point is straightforward: where is client data going?
The EU AI Act classifies AI used for credit scoring as high-risk under Annex III. That means mandatory lifecycle risk management, tamper-resistant logging, and technical documentation. Those obligations kick in August 2026. In the US, Colorado’s SB 24-205 requires lenders to disclose how AI makes lending decisions, with enforcement beginning mid-2026. The SEC has made AI its top examination priority for 2026, specifically looking at whether firms protect client data when using third-party AI tools.
The fiduciary obligation hasn’t changed. The tools have. And the question David Csiki keeps coming back to is simple: can your firm prove that client data stays private when it touches an LLM? If the answer is no, or even “probably,” that’s a red flag.
Firms that use external AI APIs risk sending sensitive portfolio data, client identifiers, and trading signals to third-party servers. Only 19 percent of financial services firms currently use data loss prevention tools for generative AI. The rest are flying blind.
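For teams wondering what a first guardrail even looks like, the sketch below (in Python) redacts likely client identifiers before a prompt leaves the firm. The patterns and the call_external_llm stand-in are hypothetical placeholders, not a full DLP policy or any particular vendor's API.
```python
import re

# Illustrative patterns only; a real DLP policy would cover far more identifier
# types (account numbers, tax IDs, names matched against CRM records).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "account_id": re.compile(r"\bACCT-\d{6,}\b"),  # hypothetical internal ID format
}

def redact(prompt: str) -> tuple[str, dict[str, int]]:
    """Replace likely client identifiers with placeholders before the prompt
    leaves the firm's perimeter; return the redacted text and hit counts."""
    hits: dict[str, int] = {}
    for label, pattern in PATTERNS.items():
        prompt, n = pattern.subn(f"[{label.upper()}_REDACTED]", prompt)
        hits[label] = n
    return prompt, hits

def call_external_llm(prompt: str) -> str:
    # Stand-in for whichever vendor API the firm actually uses.
    raise NotImplementedError

def safe_llm_call(prompt: str) -> str:
    redacted, hits = redact(prompt)
    if any(hits.values()):
        # Record the event for compliance review; never log the raw prompt.
        print(f"DLP: redacted {hits} before external call")
    return call_external_llm(redacted)
```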
“FinTechs face several compliance challenges when implementing AI-based approaches for financial decision making. With regard to investment management in general, the most important consideration for compliance professionals involves data security and data privacy. For investment management firms, this is fundamental to their fiduciary obligations to clients, namely safeguarding client data. The best practice in this area for investment firms is to apply their existing data security and data privacy standards to all AI-based approaches that are being considered or used within the investment firm. The key question to ask is whether sensitive client data is being exposed to AI-based tools, including large language models (LLMs). Every effort should be made to keep client data private, and if this cannot be demonstrated by a given AI-based approach, investment firms should re-evaluate and look to vendor solutions that safeguard client data with demonstrable methods.
Next is the area of governance, which is key for compliance. AI-based approaches and tools should have a robust set of controls and a governance layer built into their solutions. Investment firms should seek to understand how an AI-based approach works on a technical level. Key controls include being able to ‘shut down’ an AI tool on an ‘ad hoc’ basis (i.e. a ‘kill switch’) and being able to specify what a given AI tool can be used for. Use cases for investment firms involving AI include investment research, data analysis, data formatting and output (i.e. reporting), and agentic AI for administrative tasks related to investment management. Firms may seek to have limited application of AI or a comprehensive set of use cases depending on their fiduciary responsibilities to clients and overall risk tolerance.
Another important consideration is interoperability of AI-based approaches. For example, different LLMs can be assessed based on the individual investment firm’s standard for risk. Based on that, a given LLM may not be fit for purpose for a given use case. Additionally, as AI tools like LLMs continue to evolve rapidly, firms need the ability to switch quickly and easily from one provider to another based on industry events and situations that raise the risk profile of a given LLM. By assessing AI through the framework of compliance, financial firms using fintech solutions involving AI will be able to successfully prepare themselves for upcoming AI Act requirements.”
David Csiki, Managing Director, INDATA
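A rough sketch of the ‘kill switch’ and use-case scoping Csiki describes might look like the Python below. The tool names and use-case labels are invented for illustration, and a real registry would sit in a compliance-owned system rather than application code.
```python
from dataclasses import dataclass

@dataclass
class AIToolPolicy:
    """Compliance-owned record of what a given AI tool may be used for."""
    name: str
    approved_use_cases: set[str]
    enabled: bool = True  # the 'kill switch': compliance can flip this ad hoc

# Hypothetical tools and use-case labels, for illustration only.
POLICIES = {
    "research-llm": AIToolPolicy("research-llm", {"investment_research", "data_formatting"}),
    "admin-agent": AIToolPolicy("admin-agent", {"meeting_notes", "reporting"}),
}

def authorize(tool: str, use_case: str) -> None:
    """Fail closed unless the tool is enabled and approved for this use case."""
    policy = POLICIES.get(tool)
    if policy is None or not policy.enabled:
        raise PermissionError(f"{tool} is disabled or not registered")
    if use_case not in policy.approved_use_cases:
        raise PermissionError(f"{tool} is not approved for {use_case}")

# An industry incident raises the risk profile of a provider: compliance flips
# one flag and every call path guarded by authorize() stops immediately.
POLICIES["research-llm"].enabled = False
```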
AI outputs are not consistent, and auditors have noticed
Here is the practical problem nobody talks about enough: run the same prompt through the same LLM twice, and you can get two different answers. Research tracking output drift across financial tasks found variation of 25 to 75 percent on retrieval-augmented tasks. That is a compliance nightmare when auditors expect repeatable, explainable results.
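One way to surface the problem is simply to replay the same prompt and count the distinct answers. The sketch below assumes a generic generate callable rather than any specific vendor SDK.
```python
import hashlib
from collections import Counter

def fingerprint(text: str) -> str:
    """Stable hash of a model response, suitable for storing in an audit record."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()[:16]

def drift_check(prompt: str, generate, runs: int = 5) -> Counter:
    """Replay the same prompt several times and count distinct outputs.
    `generate` stands in for whatever model call the firm uses, ideally pinned
    to an exact model version and run at temperature 0."""
    return Counter(fingerprint(generate(prompt)) for _ in range(runs))

# More than one key in the resulting Counter means the 'same' question produced
# materially different answers: the repeatability problem auditors flag.
```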
In July 2025, the Massachusetts Attorney General hit Earnest Operations with a $2.5 million settlement after its AI underwriting model used college default rates as a variable, effectively penalising applicants from historically Black colleges. The AI never asked about race. It didn’t need to. Zip codes, employment history, and institutional data did the work.
Most compliance officers were never trained to reverse-engineer how a model weighted its inputs. The emerging role of “Ethical AI Compliance Officer” tries to bridge that gap, combining legal knowledge with technical understanding. But the talent pool is thin, and two-thirds of corporate directors still report limited to no knowledge of AI.
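Part of that reverse-engineering is less exotic than it sounds. A crude first pass, sketched below with an invented college_default_band feature and toy data, is to compare outcome rates across values of a nominally neutral input and flag large gaps for fair-lending review.
```python
def approval_rate_by(records: list[dict], feature: str, outcome: str = "approved") -> dict:
    """Approval rate per value of a nominally neutral feature. A large gap is a
    prompt for deeper fair-lending review, not proof of discrimination by itself."""
    groups: dict = {}
    for r in records:
        groups.setdefault(r[feature], []).append(int(r[outcome]))
    return {value: round(sum(flags) / len(flags), 3) for value, flags in groups.items()}

# Hypothetical evaluation sample; in practice this comes from historical model
# decisions joined with data collected for fair-lending testing.
sample = [
    {"college_default_band": "high", "approved": 0},
    {"college_default_band": "high", "approved": 0},
    {"college_default_band": "low", "approved": 1},
    {"college_default_band": "low", "approved": 1},
]
print(approval_rate_by(sample, "college_default_band"))  # {'high': 0.0, 'low': 1.0}
```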
Several firms are responding by pulling AI back from high-stakes decisions and restricting it to lower-risk tasks like meeting notes, document summaries, and internal search. It’s not a failure. It’s a rational risk management call while governance catches up. Tuesay Singh at Deloitte described exactly this pattern playing out across multiple banking clients.
“I work at Deloitte Consulting and focus on financial services and banking clients. There is strong interest from clients to use AI to increase efficiency, reduce work hours, or improve throughput. A common pattern I see is that a client dev team builds a prototype using their preferred LLM (e.g. Copilot, Claude, or similar) and initial results look promising. But outputs are not deterministic. When they run the same prompt a week later, or have a different team member run it, they see a different result. When the client team was called in for a monthly audit review, it became difficult to justify the accuracy of AI-generated outputs to the internal auditor. In one case, the AI generated references to regulations that did not exist, which was neither explainable nor acceptable in a regulated environment.
So we rolled the AI use case back to standard operations (e.g. meeting notes, Jira tickets, brainstorming, and web scraping) where receiving probabilistic answers from AI was not a reputational risk. Anything beyond that lacked the evidentiary backing to withstand a compliance review. A few examples:
An LLM can avoid protected characteristics as direct inputs and still learn discriminatory patterns from features like zip codes or employment history. This creates additional work for compliance officers, who have to validate inputs and reverse-engineer how the model inferred and weighted the data. Most compliance officers were not trained for this, nor did they have best practices to fall back on.
In a second scenario, another of my clients operates in Europe and must prepare for the EU AI Act’s high-risk provisions, effective August 2026. Credit scoring, fraud detection, and investment decisioning all fall under mandatory lifecycle risk management, tamper-resistant logging, and technical documentation requirements. Meanwhile, U.S. counterparts face a fragmented state-level landscape: Colorado’s SB 24-205, for example, requires disclosure of how AI lending decisions are made, effective February 2026. Given the speed of market dynamics and the slower pace of regulation, we have to help our clients find what I call a ‘bridge’ solution to meet regulatory requirements.”
Tuesay Singh, Product Lead, Deloitte Consulting
The explainability gap is where the real risk lives
Regulators are no longer satisfied with knowing what the AI decided. They want to know why. FINRA’s 2026 oversight report flags AI systems that operate beyond their intended scope and decision-making processes that are difficult to audit as active risks. The EU AI Act requires firms to be able to reconstruct any AI decision months after it happened, with full visibility into model version, data lineage, and confidence scores.
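What a reconstruction-ready record might capture is sketched below. The field names and the hash-chaining scheme are illustrative assumptions, not a regulator's template; chaining each record to the previous one is simply one low-cost way to make a decision log tamper-evident.
```python
import hashlib
import json
from datetime import datetime, timezone

def append_decision(log: list[dict], *, model_version: str, input_lineage: list[str],
                    decision: str, confidence: float) -> dict:
    """Append one AI decision to a hash-chained audit log; each record carries the
    hash of the previous record, so after-the-fact edits are detectable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_lineage": input_lineage,  # e.g. dataset or feature-store versions used
        "decision": decision,
        "confidence": confidence,
        "prev_hash": log[-1]["record_hash"] if log else "genesis",
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

# Hypothetical usage: every automated decision writes one record at decision time.
audit_log: list[dict] = []
append_decision(audit_log, model_version="credit-scorer-2.3.1",
                input_lineage=["bureau_feed@2026-01-12", "feature_store@v41"],
                decision="decline", confidence=0.62)
```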
The industry is responding. FINOS, backed by Capital One, Citi, Goldman Sachs, JPMorgan Chase, and Morgan Stanley, released version 2.0 of its open-source AI Governance Framework addressing 30-plus risks with specific controls for agentic AI. Firms are shifting from static compliance reports to live, versioned audit trails. But only 28 percent of organisations using AI currently have a centralised system to track model changes, versioning, and decision-making.
The technology is ready. The governance is not. And that gap is where enforcement actions will land.
“The biggest headache for fintechs right now is what I call the explainability gap. It’s not enough to just show the results anymore. Regulators are moving past simple outcome monitoring; they want a granular look at exactly why the AI made a specific call. The real nightmare is proxy discrimination. An AI might find variables that seem neutral on the surface but actually correlate with protected classes. That creates a black box bias that’s incredibly hard to defend when you’re sitting through a fair lending audit.
Documenting these decisions has also shifted completely. We’ve moved away from static reports to live, versioned audit trails. If you look at the EU AI Act, using AI for creditworthiness is explicitly labeled high-risk. That triggers a massive need for rigorous data governance and human oversight. We’re seeing firms pivot toward automated logging that captures everything–the exact model version, the data lineage, and the confidence scores for every single transaction. You need to be able to reconstruct a decision months after it happened.
Navigating this requires a total mindset shift. You aren’t just building a smart tool; you’re building a defensible process. The reality is that the technology is usually ready long before the governance framework is. That gap is where the real enterprise risk lives for most fintech operators. If the tech outpaces your ability to explain it, you’re in trouble.”
Kuldeep Kundal, Founder & CEO, CISIN
What comes next
The compliance landscape for AI in financial services will only get more complex through 2026. The EU’s high-risk provisions take full effect in August. US states continue passing conflicting laws while federal preemption remains uncertain. Texas, California, and Illinois all have new AI-related requirements already in force. Regulators everywhere are moving from guidance to enforcement.
The RegTech market supporting these compliance needs is projected to grow from $16 billion in 2025 to $62 billion by 2032. That tells you everything about the scale of the problem.
The firms getting it right are doing three things. They are treating governance as a prerequisite, not an afterthought. They are building documentation systems that can reconstruct any decision on demand. And they are matching AI use cases to the level of oversight each one requires, keeping high-stakes decisions under tight human supervision while using AI freely for lower-risk tasks.
The message from every expert we spoke with was the same: if your technology has outpaced your ability to explain it to a regulator, you have a problem that needs fixing now, not later.
