The European Union's AI Act is an ambitious regulation designed to do what regulators often struggle with: getting ahead of the market and mitigating risks before they materialize. Some argue it may stifle innovation, but it is a deliberately proactive move to promote responsible use of AI technologies.
Spoiler: requirements start as early as the development phase.
As a rule, the law connects the level of risk posed by an AI system to the obligations that will apply to it.
Here, we'll focus on high-risk AI systems in banking and payments. (In future posts, we'll also explore trading, investment, and other risk-tiered use cases.)
⸻
🧠 When Is an AI System Considered High-Risk?
The AI Act defines high-risk systems in two main cases:
- Annex I: The AI system is embedded in a product that's already regulated in the EU (e.g., vehicles, medical devices).
- Annex III: The AI system is used in sensitive domains, including creditworthiness assessment or credit scoring 💳 (Annex III, point 5(b)).
The law explicitly states:
"AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud."
⸻
💳 What Does This Mean for Fintech and Payment Companies?
Any AI model that influences loan approvals, credit limits, or payment method eligibility for individual users must comply with strict regulatory obligations.
Even automated recommendation or approval engines may fall under the scope if they play a role in critical decision-making.
⸻
🧾 Core Requirements for High-Risk AI Systems Include:
- ✅ Transparency: a clear explanation of how decisions are made (statistical or business logic)
- ✅ Fairness: bias mitigation (e.g., across gender, ethnicity, age)
- ✅ Human Oversight: the ability to intervene in or appeal decisions in real time
- ✅ Risk Assessment: analysis performed before deployment
- ✅ Data Quality: diverse, high-quality data sources
- ✅ Logging: ongoing documentation for auditing purposes
- ✅ Technical Documentation: a detailed description of how the system works and why
- ✅ User Instructions: clear guidance for implementers and end users
- ✅ Reliability and Security: high accuracy and robust cybersecurity measures
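Several of these requirements (transparency, human oversight, logging) translate directly into engineering practice. The sketch below shows one minimal way a credit-decision wrapper could record every decision for auditors, attach plain-language reasons, and expose a human-review hook. All names here (`CreditDecisionEngine`, the scoring formula, the feature keys) are hypothetical illustrations, not anything prescribed by the Act.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CreditDecision:
    applicant_id: str
    score: float
    approved: bool
    reasons: list   # transparency: plain-language factors behind the decision
    timestamp: str
    human_review_requested: bool = False

class CreditDecisionEngine:
    """Toy wrapper adding audit logging and a human-override hook to a scorer."""

    def __init__(self, threshold: float = 0.6):
        self.threshold = threshold
        self.audit_log: list[dict] = []  # logging: retained for auditors

    def decide(self, applicant_id: str, features: dict) -> CreditDecision:
        score = self._score(features)
        decision = CreditDecision(
            applicant_id=applicant_id,
            score=round(score, 3),
            approved=score >= self.threshold,
            reasons=self._explain(features),
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self.audit_log.append(asdict(decision))  # every decision is recorded
        return decision

    def request_human_review(self, decision: CreditDecision) -> CreditDecision:
        # human oversight: an appeal flags the case for a reviewer
        decision.human_review_requested = True
        self.audit_log.append({"event": "human_review", **asdict(decision)})
        return decision

    def _score(self, features: dict) -> float:
        # placeholder linear scoring; a real model would live here
        return min(1.0, 0.5 * features.get("income_ratio", 0)
                        + 0.5 * features.get("repayment_history", 0))

    def _explain(self, features: dict) -> list:
        return [f"{name}={value}" for name, value in sorted(features.items())]

engine = CreditDecisionEngine()
d = engine.decide("applicant-001", {"income_ratio": 0.8, "repayment_history": 0.9})
```

In a real system the audit log would go to durable, tamper-evident storage rather than an in-memory list, and the explanation would come from the model itself (e.g., feature attributions), but the shape of the obligation is the same.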
⸻
🕵️ What About Fraud Detection?
While fraud detection is one of the most common AI applications in financial services, it is not currently classified as high-risk under Annex III.
As clarified in Recital 58:
"AI systems for detecting financial fraud or calculating capital requirements are not considered high-risk."
Why? Because this area is already covered by existing frameworks such as PSD2 and the EU's anti-money-laundering (AML) rules.
Note: This interpretation may evolve as AI use cases expand.
⸻
When Does It Come into Effect?
- February 2, 2025: the first obligations apply, including bans on prohibited (unacceptable-risk) AI practices and AI literacy requirements
- August 2, 2026: most remaining provisions apply, including the obligations for high-risk AI systems under Annex III
(Important: this is a Regulation, not a Directive, meaning it applies directly in all EU member states without national transposition.)
⸻
💬 Summary: Act Now
The EU AI Act sets a new standard, and it's already here.
If you're in fintech, banking, or payments, and your go-to-market strategy includes Europe:
- Check whether your system qualifies as "high-risk"
- Ensure proper documentation, oversight, and explainability
- Prepare for ongoing monitoring and regulatory engagement
⸻
Operating in financial services?
How are you preparing for the new AI rules?
I focus on the intersection of regulation and AI. Feel free to share, reach out, or discuss your strategy.