Author: Charitarth Sindhu, Fractional Business & AI Workflow Consultant
The EU AI Act’s August 2, 2026 deadline is closing fast. For fintech companies using AI in credit scoring, automated lending, or insurance underwriting, this is not a distant regulatory event. It is a hard compliance wall, with fines reaching €35 million or 7% of global turnover, whichever is higher.
We asked founders, CTOs, and tech leaders across the industry one simple question: What is one step fintech companies should take now to prepare for the EU AI Act’s August 2026 deadline?
Their answers point to a clear consensus. Start with visibility. You cannot comply with rules you do not understand, and you cannot classify risk for systems you do not know exist.
Why Fintech Is in the Crosshairs
The EU AI Act sorts AI systems into four risk tiers: prohibited, high-risk, limited, and minimal. Fintech draws the short straw here.
Credit scoring, creditworthiness assessment, and automated lending decisions are all explicitly listed as high-risk in the Act’s Annex III [https://artificialintelligenceact.eu/annex/3/]. So is AI used in life and health insurance underwriting. These are not edge cases or grey areas. They are named directly in the legislation.
However, there is one major carve-out that many early analyses miss. Fraud detection is explicitly excluded from the high-risk tier. This distinction matters for compliance budgeting. Companies should not waste high-risk resources on systems the Act deliberately leaves out.
For everything that does qualify as high-risk, the obligations are significant. Companies need risk management systems, data governance frameworks, human oversight mechanisms, technical documentation, conformity assessments, post-market monitoring, and registration in the EU database. All of this must be operational by August 2, 2026.
Step One: Know What You Have
Every expert we spoke to circled back to the same starting point. Before you can classify risk, fix gaps, or build compliance frameworks, you need a complete picture of every AI system in your stack.
“Start an AI inventory and classification now: list every model and automated decision in your product (including vendor tools), map the data feeding it, and label the likely EU AI Act risk tier. Once you can see it clearly, you can prioritize what needs deeper controls (logging, human oversight, testing for bias, documentation) before August 2026, instead of scrambling in the dark.”
- Julia Pukhalskaia, CEO, Mermaid Way
This is not a theoretical exercise. Over half of organisations still lack even a basic AI inventory, according to recent compliance research. An appliedAI study of 106 enterprise AI systems found 40% had unclear classification, meaning they could not be definitively placed in a risk tier. You cannot build compliance on that kind of foundation.
“Start building and maintaining an ‘AI inventory’ now: a single register of every AI use case (including third-party APIs) tied to its purpose, training/inference data sources, model/version, where outputs go, and who owns it internally. In practice, this is the foundation for classification under the EU AI Act, plus the documentation and controls you’ll need later (monitoring, incident handling, and audit trails).
“We’ve seen this work best when it’s implemented as a lightweight, enforceable engineering workflow: require an entry in the inventory as part of the SDLC (e.g., a CI/TeamCity gate for .NET Core services and Angular apps) and log key inference events to SQL with versioned configs. It’s a boring step, but it prevents the scramble in 2026 when you’re asked to prove what models you run, on what data, and with what controls.”
- Igor Golovko, Developer and Founder, TwinCore
Golovko’s point about embedding compliance into engineering workflows is worth sitting with. Treating AI governance as a periodic audit will not scale. Building it into CI/CD pipelines, with automated gates that prevent unregistered systems from deploying, turns compliance from a scramble into a process.
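To make that concrete, here is a minimal sketch of such a gate in Python, rather than the .NET/TeamCity setup Golovko describes. The file names, field names, and tier labels are illustrative assumptions, not anything the Act prescribes: the script simply fails the build when a model the service declares is missing from the inventory or lacks a valid risk tier.

```python
# ai_inventory_gate.py -- hypothetical CI gate sketch. File layout, field names,
# and tier labels are illustrative assumptions, not the Act's wording.
import json
import sys

REQUIRED_FIELDS = {"purpose", "data_sources", "model_version", "owner", "risk_tier"}
VALID_TIERS = {"prohibited", "high-risk", "limited", "minimal"}

def load(path: str):
    with open(path) as f:
        return json.load(f)

def check(inventory_path: str, models_in_use_path: str) -> int:
    # The inventory is a list of entries keyed by model_id; the repo declares
    # which models the service actually calls.
    inventory = {entry["model_id"]: entry for entry in load(inventory_path)}
    models_in_use = load(models_in_use_path)

    errors = []
    for model_id in models_in_use:
        entry = inventory.get(model_id)
        if entry is None:
            errors.append(f"{model_id}: not registered in the AI inventory")
            continue
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            errors.append(f"{model_id}: missing fields {sorted(missing)}")
        elif entry["risk_tier"] not in VALID_TIERS:
            errors.append(f"{model_id}: unknown risk tier '{entry['risk_tier']}'")

    for err in errors:
        print(f"AI inventory gate: {err}")
    return 1 if errors else 0  # non-zero exit fails the pipeline

if __name__ == "__main__":
    sys.exit(check("ai_inventory.json", "models_in_use.json"))
```

Wired in as a required CI step, this is the “boring” control that keeps the inventory current without relying on anyone remembering to update a spreadsheet.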
Follow the Data
Once you know what AI systems you have, the next step is understanding how data flows through them. Article 10 of the Act imposes strict requirements on training, validation, and testing datasets. They must be relevant, representative, and as free of errors as possible. Companies must document data origins, preparation steps, bias assessments, and assumptions about what the data represents.
“We recommend mapping the full journey of data from collection to model output. Document the consent basis, retention windows, sensitive attribute handling, and any enrichment from third parties. This step helps ensure that data is tracked clearly throughout the entire process. Implementing a lightweight control will prevent new data sources from entering the pipeline without review.
Taking these steps not only supports compliance but also improves model performance monitoring. It reduces the rework needed when regulators ask how a decision was made. Clear data lineage allows you to respond quickly if a dataset needs to be corrected or removed. Understanding this flow ensures that data rights and labeling issues do not lead to failures.”
- Christopher Pappas, Founder, eLearning Industry Inc
This data lineage work is not just about ticking a regulatory box. When a regulator asks how a credit decision was made, or when a dataset needs to be corrected or removed, companies with clear data maps can respond in days. Those without them face weeks of forensic reconstruction.
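One way to keep that lineage tractable is to record it as structured data rather than prose. The sketch below shows one possible record shape, assuming a team keeps one entry per dataset; the field names echo the items Article 10 and Pappas mention but are not prescribed by either, and the example values are hypothetical.

```python
# Hypothetical dataset lineage record -- field names are illustrative, not
# mandated by Article 10; they mirror the items the Act asks you to document.
from dataclasses import dataclass, field

@dataclass
class DatasetLineage:
    dataset_id: str
    source: str                     # where the data was collected
    consent_basis: str              # e.g. "contract", "legitimate interest"
    retention_until: str            # ISO date when the data must be deleted
    sensitive_attributes: list[str] = field(default_factory=list)
    third_party_enrichment: list[str] = field(default_factory=list)
    preparation_steps: list[str] = field(default_factory=list)  # cleaning, labelling
    bias_assessment: str = ""       # link to or summary of the assessment
    feeds_models: list[str] = field(default_factory=list)       # downstream model IDs

# Example entry for a credit-scoring training set (placeholder values):
bureau_2025 = DatasetLineage(
    dataset_id="credit-bureau-2025Q3",
    source="credit bureau API export",
    consent_basis="contract",
    retention_until="2030-09-30",
    sensitive_attributes=["age", "postcode"],
    third_party_enrichment=["income-estimation vendor"],
    preparation_steps=["deduplication", "outlier removal", "label audit"],
    bias_assessment="docs/bias/credit-bureau-2025Q3.md",
    feeds_models=["credit-score-v4"],
)
```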
The Cost of Waiting
The compliance deadline is not the only pressure point. Companies that treat the EU AI Act as a checkbox item risk real financial damage, both from fines and from operational disruption.
“Over 15 years, I have seen AI regulatory requirements delay product launches, as was the case last year when a large payments provider’s EU launch was held up at a cost of more than 10 million euros because regulatory compliance had to be expedited.
The problem is that fintech companies are treating the EU AI Act as a checkbox item and will not be ready by the August 2026 compliance deadline for high-risk applications such as credit scoring. Failing to comply with the new EU AI Act means that companies could incur fines of up to 7% of their global revenue.
The time to act is now – conduct a full audit of your AI assets and associated risks.
Map your AI assets (models, data sets) and their associated use cases across the tiers in the AI Act (prohibited, high-risk).
Identify any gaps you may have in relation to the EU’s vetting checklist (ai-act.eu).
Prioritise corrective actions (per Deloitte’s 2025 Fintech Guide) based on the gaps identified.
Taking these actions will reduce your firm’s exposure to regulatory non-compliance risk by 40 to 60% and demonstrate to investors that your firm is compliant and the right partner for entering the EU market. When I have performed these audits, timely corrective actions have reduced my clients’ costs by more than 35%.”
- Dhari Alabdulhadi, CTO and Founder, Ubuy Qatar
Do Not Forget Your Vendor Stack
One blind spot that keeps coming up is third-party AI. Many fintech companies rely on vendor tools for payment routing, KYC checks, onboarding decisions, and risk scoring. If those tools use AI that makes decisions affecting whether someone gets approved, flagged, or paid, the compliance obligation does not sit with the vendor alone.
“Map every point where AI touches a financial decision in your product, especially around KYC, payment routing, and risk scoring, and classify each one against the EU AI Act’s risk tiers before you do anything else. At Remotify, we process cross-border payments for freelancers across 90+ countries, so we deal with automated compliance checks daily. The companies that will struggle most are the ones using third-party AI tools baked into their payment or onboarding stack without realizing those tools may qualify as high-risk under the Act. If a vendor’s AI is making decisions that affect whether someone gets paid or gets flagged, that is your compliance problem, not theirs. Audit your entire stack now, including every API and vendor tool that runs any form of automated decision-making, and document who is responsible for what before August 2026 forces the question.”
- Hasan Can Soygök, Founder, Remotify.co
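A simple register per AI touchpoint is usually enough to force the questions Soygök raises. The sketch below uses placeholder vendors and classifications; what matters are the fields, especially who owns the touchpoint internally and whether the vendor contract actually addresses EU AI Act responsibilities.

```python
# Hypothetical register of third-party AI touchpoints -- vendor names, tiers,
# and ownership assignments are placeholder values for illustration only.
vendor_ai_touchpoints = [
    {
        "touchpoint": "KYC identity verification",
        "vendor": "ExampleKYC API",           # placeholder vendor
        "decision_affected": "onboarding approval",
        "likely_tier": "high-risk",           # your own classification, to be validated against Annex III
        "internal_owner": "compliance-lead",
        "contract_covers_ai_act": False,      # does the DPA/MSA address provider vs deployer duties?
    },
    {
        "touchpoint": "transaction fraud screening",
        "vendor": "ExampleFraudScore",
        "decision_affected": "payment flagged for review",
        "likely_tier": "excluded (fraud detection carve-out)",
        "internal_owner": "payments-lead",
        "contract_covers_ai_act": True,
    },
]

# Surface the entries that need attention first: likely high-risk touchpoints
# where responsibilities are not yet settled in the vendor contract.
gaps = [t for t in vendor_ai_touchpoints
        if t["likely_tier"] == "high-risk" and not t["contract_covers_ai_act"]]
for t in gaps:
    print(f"Review contract and controls for: {t['touchpoint']} ({t['vendor']})")
```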
Where to Start This Week
The consensus from these industry leaders is clear: visibility first, classification second, remediation third.
Companies that have not started an AI inventory should treat it as their single highest priority. Those that have one should validate it against the Act’s Annex III to confirm their risk classifications are accurate.
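As a rough illustration of that validation step, the sketch below flags inventory entries whose stated purpose looks like an Annex III fintech category but which are not marked high-risk. It assumes the same inventory entry shape as the gate sketch above, and the keyword list is a deliberate shortcut for illustration; the real classification call needs a human reading the Annex itself.

```python
# Illustrative check: flag inventory entries whose purpose matches an Annex III
# fintech category but are not classified as high-risk. Keyword matching is a
# shortcut for illustration, not a substitute for legal review.
ANNEX_III_FINTECH_HINTS = ("credit scoring", "creditworthiness", "lending decision",
                           "life insurance underwriting", "health insurance underwriting")

def flag_suspect_classifications(inventory: list[dict]) -> list[str]:
    suspect = []
    for entry in inventory:
        use_case = entry.get("purpose", "").lower()
        if any(hint in use_case for hint in ANNEX_III_FINTECH_HINTS) \
                and entry.get("risk_tier") != "high-risk":
            suspect.append(f"{entry.get('model_id', '?')}: '{entry.get('purpose')}' is "
                           f"classified '{entry.get('risk_tier')}' -- recheck against Annex III")
    return suspect
```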
Several free tools can help. The European Commission’s AI Act Service Desk [https://ai-act-service-desk.ec.europa.eu/en] offers a compliance checker. The Future of Life Institute maintains a detailed compliance resource at artificialintelligenceact.eu [https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/]. And ISO/IEC 42001 provides a certifiable AI management system framework that covers roughly 80% of what the Act requires.
The conformity assessment process alone typically takes six to twelve months. With less than six months remaining, the window for comfortable preparation has already closed. The window for adequate preparation is closing now.
