Industry Trends

Fintechs Are Racing to Comply With AI Rules That Don’t Fully Exist Yet

Image caption: AI-driven financial decisions sit at the centre of a fast-closing regulatory gap as the EU AI Act and US state laws take effect through 2026.

AI is making lending decisions, flagging fraud, and managing investment portfolios. Regulators are catching up. And the fintechs caught in the middle are trying to build compliance frameworks for rules that keep shifting under their feet.

We asked industry leaders what compliance challenges they’re seeing on the ground as AI takes on a bigger role in financial decision-making. Their answers paint a picture of an industry navigating real risk with incomplete guidance.

The data problem comes first

For investment firms, the starting point is straightforward: where is client data going?

The EU AI Act classifies AI used for credit scoring as high-risk under Annex III. That means mandatory lifecycle risk management, tamper-resistant logging, and technical documentation. Those obligations take effect in August 2026. In the US, Colorado’s SB 24-205 requires lenders to disclose how AI makes lending decisions, with enforcement beginning mid-2026. The SEC has made AI its top examination priority for 2026, specifically looking at whether firms protect client data when using third-party AI tools.
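
To make “tamper-resistant logging” concrete, here is a minimal sketch, in Python, of an append-only decision log in which each entry chains the hash of the previous one, so a later edit to any record breaks the chain. The class name and record fields are illustrative assumptions, not a schema mandated by the AI Act.

```python
# Hypothetical sketch of tamper-evident decision logging for a high-risk
# credit-scoring system. Field names are assumptions for illustration.
import hashlib
import json
import time


class DecisionLog:
    """Append-only log; each entry embeds the hash of the previous entry."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, applicant_id: str, model_version: str,
               inputs: dict, decision: str, confidence: float) -> dict:
        entry = {
            "timestamp": time.time(),
            "applicant_id": applicant_id,
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "confidence": confidence,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered after the fact."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if expected != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


log = DecisionLog()
log.record("app-1042", "credit-model-2026.02", {"income": 58000}, "approve", 0.87)
assert log.verify()
```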

The fiduciary obligation hasn’t changed. The tools have. And the question David Csiki keeps coming back to is simple: can your firm prove that client data stays private when it touches an LLM? If the answer is no, or even “probably,” that’s a red flag.

Firms that use external AI APIs risk sending sensitive portfolio data, client identifiers, and trading signals to third-party servers. Only 19 percent of financial services firms currently use data loss prevention tools for generative AI. The rest are flying blind.
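
For a sense of what even a basic data loss prevention step looks like, the sketch below scrubs obvious client identifiers from a prompt before it leaves the firm. The regex patterns and the send_to_llm stub are assumptions for illustration, not any particular vendor’s product.

```python
# Illustrative pre-flight redaction before calling an external LLM API.
import re

PATTERNS = {
    "ACCOUNT": re.compile(r"\b\d{10,12}\b"),          # account-number-like strings
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN format
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def redact(prompt: str) -> tuple[str, dict]:
    """Replace sensitive substrings with placeholders; return the mapping for audit."""
    found = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            found[token] = match
            prompt = prompt.replace(match, token)
    return prompt, found


def send_to_llm(prompt: str) -> str:
    # Placeholder for the actual third-party API call.
    return f"[model response to: {prompt}]"


raw = "Summarise holdings for client jane.doe@example.com, account 123456789012."
safe_prompt, audit_map = redact(raw)
print(send_to_llm(safe_prompt))   # only the redacted prompt leaves the firm
print(audit_map)                  # retained internally for the audit trail
```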

“FinTechs face several compliance challenges when implementing AI-based approaches for financial decision making. With regard to investment management in general, the most important consideration for compliance professionals involves data security and data privacy. For investment management firms, this is fundamental to their fiduciary obligations to clients, namely safeguarding client data. The best practice in this area is to apply the firm’s existing data security and data privacy standards to all AI-based approaches being considered or used. The key question to ask is whether sensitive client data is being exposed to AI-based tools, including large language models (LLMs). Every effort should be made to keep client data private, and if this cannot be demonstrated by a given AI-based approach, investment firms should re-evaluate and look to vendor solutions that safeguard client data with demonstrable methods.

Next is the area of governance, which is key for compliance. AI-based approaches and tools should have a robust set of controls and a governance layer built into their solutions. Investment firms should seek to understand how an AI-based approach works on a technical level. Key controls include being able to ‘shut down’ an AI tool on an ‘ad hoc’ basis (i.e. a ‘kill switch’) and being able to specify what a given AI tool can be used for. Use cases for investment firms include investment research, data analysis, data formatting and output (i.e. reporting), and agentic AI for administrative tasks related to investment management. Firms may keep the application of AI limited or pursue a comprehensive set of use cases depending on their fiduciary responsibilities to clients and overall risk tolerance.

Another important consideration is interoperability of AI-based approaches. For example, different LLMs can be assessed against the individual investment firm’s standard for risk. Based on that, a given LLM may not be fit for purpose for a given use case. Additionally, as AI tools like LLMs continue to evolve rapidly, firms need the ability to switch quickly and easily from one provider to another when industry events and situations raise the risk profile of a given LLM. By assessing AI through the framework of compliance, financial firms using fintech solutions involving AI will be able to prepare themselves for upcoming AI Act requirements.”

David Csiki, Managing Director, INDATA
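
The two controls Csiki names, a kill switch and an explicit list of approved uses, can be expressed as a thin governance layer sitting in front of any AI tool. The sketch below is hypothetical; the class and use-case names are ours, not INDATA’s.

```python
# Minimal governance wrapper: per-tool use-case allowlist plus an ad hoc kill switch.


class AIToolGovernor:
    def __init__(self, tool_name: str, approved_uses: set[str]):
        self.tool_name = tool_name
        self.approved_uses = approved_uses
        self.enabled = True

    def kill(self, reason: str) -> None:
        """Ad hoc shutdown, e.g. after a provider incident raises its risk profile."""
        self.enabled = False
        print(f"{self.tool_name} disabled: {reason}")

    def authorise(self, use_case: str) -> bool:
        """Allow calls only for uses the firm has explicitly approved."""
        return self.enabled and use_case in self.approved_uses


governor = AIToolGovernor("research-llm", {"investment_research", "report_formatting"})
assert governor.authorise("report_formatting")
assert not governor.authorise("trade_execution")   # never approved
governor.kill("provider security incident")
assert not governor.authorise("investment_research")
```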

AI outputs are not consistent, and auditors have noticed

Here is the practical problem nobody talks about enough: run the same prompt through the same LLM twice, and you can get two different answers. Research tracking output drift across financial tasks found variation of 25 to 75 percent on retrieval-augmented tasks. That is a compliance nightmare when auditors expect repeatable, explainable results.
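
Measuring that drift does not require anything exotic. A rough sketch: run the same prompt repeatedly and score pairwise agreement. The query_llm stub below stands in for whatever model a firm actually calls, and the string-similarity metric is a deliberately simple assumption.

```python
# Back-of-envelope output-drift check: repeated runs, average pairwise dissimilarity.
from difflib import SequenceMatcher
from itertools import combinations
import random


def query_llm(prompt: str) -> str:
    # Placeholder: in practice this calls the production model.
    return random.choice([
        "Flag transaction: velocity anomaly.",
        "Flag transaction: velocity anomaly detected in last 24h.",
        "No action required.",
    ])


def drift_score(prompt: str, runs: int = 10) -> float:
    """Average pairwise dissimilarity across repeated runs (0 means identical outputs)."""
    outputs = [query_llm(prompt) for _ in range(runs)]
    pairs = list(combinations(outputs, 2))
    similarity = sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)
    return 1.0 - similarity


print(f"drift: {drift_score('Should this transaction be flagged?'):.0%}")
```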

In July 2025, the Massachusetts Attorney General hit Earnest Operations with a $2.5 million settlement after its AI underwriting model used college default rates as a variable, effectively penalising applicants from historically Black colleges. The AI never asked about race. It didn’t need to. Zip codes, employment history, and institutional data did the work.
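
This is why bias checks have to run on outcomes, not just on inputs. The sketch below uses synthetic data and a simple demographic-parity gap to show how a model that never saw race can still produce a disparity worth investigating; the 10 percent review threshold is an assumed internal policy, not a legal standard.

```python
# Illustrative outcome-level fairness screen on synthetic decisions.


def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """decisions: (group_label, approved). Returns the largest approval-rate gap."""
    by_group = {}
    for group, approved in decisions:
        by_group.setdefault(group, []).append(approved)
    approval = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(approval.values()) - min(approval.values())


# Synthetic outcomes from a model that excluded race but used a proxy feature.
outcomes = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
         + [("group_b", True)] * 55 + [("group_b", False)] * 45

gap = demographic_parity_gap(outcomes)
print(f"approval-rate gap: {gap:.0%}")   # 25% here
if gap > 0.10:                           # assumed internal review threshold
    print("flag model for fair-lending review")
```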

Most compliance officers were never trained to reverse-engineer how a model weighted its inputs. The emerging role of “Ethical AI Compliance Officer” tries to bridge that gap, combining legal knowledge with technical understanding. But the talent pool is thin, and two-thirds of corporate directors still report limited to no knowledge of AI.

Several firms are responding by pulling AI back from high-stakes decisions and restricting it to lower-risk tasks like meeting notes, document summaries, and internal search. It’s not a failure. It’s a rational risk management call while governance catches up. Tuesay Singh at Deloitte described exactly this pattern playing out across multiple banking clients.

“I work at Deloitte Consulting and focus on financial services and banking clients. There is strong interest from clients in using AI to increase efficiency, reduce work hours, or improve throughput. A common pattern I see is that a client dev team builds a prototype using their preferred LLM (e.g. Copilot, Claude, or similar) and initial results look promising. But outputs are not deterministic. When they run the same prompt a week later, or have a different team member run it, they see a different result. When the client team was called in for a monthly audit review, it became difficult to justify the accuracy of AI-generated outputs to the internal auditor. In one case, the AI generated references to regulations that did not exist, which was neither explainable nor acceptable in a regulated environment.

So we rolled the AI use case back to standard operations (e.g. meeting notes, Jira tickets, brainstorming, and web scraping) where receiving probabilistic answers from AI was not a reputation risk. Anything beyond that lacked the evidentiary backing to withstand a compliance review. A few examples:

An LLM can avoid protected characteristics as direct inputs and still learn discriminatory patterns from features like zip codes or employment history. This creates additional work for compliance officers, who have to validate inputs and reverse-engineer how the model inferred and weighted the data. Most compliance officers were not trained for this, nor did they have best practices to fall back on.

In a second scenario, another of my clients operates in Europe and must prepare for the EU AI Act’s high-risk provisions, effective August 2026. Credit scoring, fraud detection, and investment decisioning all fall under mandatory lifecycle risk management, tamper-resistant logging, and technical documentation requirements. Meanwhile, U.S. counterparts face a fragmented state-level landscape: Colorado’s SB 24-205, for example, requires disclosure of how AI lending decisions are made, effective February 2026. Given the speed of market dynamics and the slowness of regulation, we have to help our clients find what I call a ‘bridge’ solution to meet regulatory requirements.”

Tuesay Singh, Product Lead, Deloitte Consulting

The explainability gap is where the real risk lives

Regulators are no longer satisfied with knowing what the AI decided. They want to know why. FINRA’s 2026 oversight report flags AI systems that operate beyond their intended scope and decision-making processes that are difficult to audit as active risks. The EU AI Act requires firms to be able to reconstruct any AI decision months after it happened, with full visibility into model version, data lineage, and confidence scores.

The industry is responding. FINOS, backed by Capital One, Citi, Goldman Sachs, JPMorgan Chase, and Morgan Stanley, released version 2.0 of its open-source AI Governance Framework addressing 30-plus risks with specific controls for agentic AI. Firms are shifting from static compliance reports to live, versioned audit trails. But only 28 percent of organisations using AI currently have a centralised system to track model changes, versioning, and decision-making.
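
What a “live, versioned audit trail” captures per decision can start as a structured record written at decision time: model version, input snapshot, data lineage, and confidence. The fields below are illustrative assumptions, not the FINOS framework’s schema.

```python
# Sketch of a per-decision audit record that can be replayed months later.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AuditRecord:
    decision_id: str
    model_name: str
    model_version: str
    input_snapshot: dict          # exact features the model saw
    data_sources: list[str]       # lineage: where each input came from
    output: str
    confidence: float
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)


record = AuditRecord(
    decision_id="cr-2026-000317",
    model_name="credit-risk",
    model_version="4.2.1",
    input_snapshot={"income": 61000, "dti": 0.31},
    data_sources=["core_banking.accounts", "bureau_feed.2026-02-10"],
    output="approve",
    confidence=0.91,
)
print(record.to_json())   # in practice, written to an append-only store
```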

The technology is ready. The governance is not. And that gap is where enforcement actions will land.

“The biggest headache for fintechs right now is what I call the explainability gap. It’s not enough to just show the results anymore. Regulators are moving past simple outcome monitoring; they want a granular look at exactly why the AI made a specific call. The real nightmare is proxy discrimination. An AI might find variables that seem neutral on the surface but actually correlate with protected classes. That creates a black box bias that’s incredibly hard to defend when you’re sitting through a fair lending audit.

Documenting these decisions has also shifted completely. We’ve moved away from static reports to live, versioned audit trails. If you look at the EU AI Act, using AI for creditworthiness is explicitly labeled high-risk. That triggers a massive need for rigorous data governance and human oversight. We’re seeing firms pivot toward automated logging that captures everything–the exact model version, the data lineage, and the confidence scores for every single transaction. You need to be able to reconstruct a decision months after it happened.

Navigating this requires a total mindset shift. You aren’t just building a smart tool; you’re building a defensible process. The reality is that the technology is usually ready long before the governance framework is. That gap is where the real enterprise risk lives for most fintech operators. If the tech outpaces your ability to explain it, you’re in trouble.”

Kuldeep Kundal, Founder & CEO, CISIN

What comes next

The compliance landscape for AI in financial services will only get more complex through 2026. The EU’s high-risk provisions take full effect in August. US states continue passing conflicting laws while federal preemption remains uncertain. Texas, California, and Illinois all have new AI-related requirements already in force. Regulators everywhere are moving from guidance to enforcement.

The RegTech market supporting these compliance needs is projected to grow from $16 billion in 2025 to $62 billion by 2032. That tells you everything about the scale of the problem.

The firms getting it right are doing three things. They are treating governance as a prerequisite, not an afterthought. They are building documentation systems that can reconstruct any decision on demand. And they are matching AI use cases to the level of oversight each one requires, keeping high-stakes decisions under tight human supervision while using AI freely for lower-risk tasks.
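
That last point, matching use cases to oversight, can begin as nothing more than a policy table that defaults to the strictest tier. The tiers and task names below are illustrative assumptions, not a regulatory taxonomy.

```python
# Simple use-case-to-oversight mapping that fails closed for unknown tasks.

OVERSIGHT_POLICY = {
    "credit_decision":  "human_approves_every_output",   # high stakes
    "fraud_flagging":   "human_reviews_samples_daily",
    "document_summary": "spot_check_monthly",            # lower risk
    "meeting_notes":    "no_review_required",
}


def required_oversight(use_case: str) -> str:
    # Default to the strictest tier for anything not explicitly classified.
    return OVERSIGHT_POLICY.get(use_case, "human_approves_every_output")


print(required_oversight("credit_decision"))
print(required_oversight("new_unclassified_task"))   # falls back to strictest tier
```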

The message from every expert we spoke with was the same: if your technology has outpaced your ability to explain it to a regulator, you have a problem that needs fixing now, not later.
