The AI Act and finance: what impact on scoring and risk assessment?

The European regulation on artificial intelligence, or AI Act, came into force on 1 August 2024. It is the first legal framework aimed at regulating the use of AI through a risk-based approach. By classifying AI systems according to their level of risk, from unacceptable risk to high-risk uses, the European Union has chosen a proportionate yet ambitious approach. The AI Act particularly affects finance, where AI directly influences credit, pricing, and fraud detection decisions, with major consequences for individuals and for economic stability. 

For banks, insurance companies, and financial institutions, the AI Act marks a turning point. Credit scoring, insurance risk assessment, and automated regulatory compliance management explicitly feature on the list of high-risk systems defined by the regulation.  

The challenge is twofold: to prepare for new documentation, governance, and supervision obligations, while transforming this constraint into a lever of trust with clients. Do you want to know how the AI Act will impact your financial activities?

Why is the financial sector at the heart of the AI Act?

Finance and insurance are among the most regulated sectors in Europe, and the AI Act continues this trend. The regulation identifies as "high-risk" systems used for: 

  • creditworthiness assessment and the granting of credit, 
  • the pricing of life and health insurance, 
  • the management of financial services considered essential. 

 

These activities are not simply about technological convenience; they determine citizens' access to vital resources such as housing, healthcare, and pensions.  

Any error or bias in an AI model can have direct repercussions on the lives of those concerned. This is why European legislation imposes specific constraints on financial actors: quality data, human supervision, transparency, and technical robustness. 

The CNIL reminds us that the regulation does not create an isolated framework, but rather is aligned with existing texts such as the GDPR. This means that the processing of personal data carried out within AI systems remains subject to the GDPR, in addition to the new obligations of the AI Act. An AI Act impact assessment may also be coordinated with the impact assessment provided for by the GDPR. 

If governance is fragmented, the risk is twofold: a legal reading without technical vision or, conversely, a technical implementation without consideration of fundamental rights. In both cases, the organisation exposes itself to regulatory blind spots. 

The AI Act's legal obligations

Risk assessment in insurance

Insurance is also at the heart of the regulator's concerns. The use of AI to price health or life insurance contracts, or to detect risk of claims, is subject to the same transparency and robustness requirements. 

Companies must ensure that the data used does not create unjustified exclusions, that models are explained and understandable, and that policyholders have recourse in the event of automated decisions.  

Data governance plays a central role here: the AI Act requires that training datasets be representative, relevant, and documented. The aim is to avoid biases that could undermine policyholders' trust in the system. 
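As an illustration of what such a documentation check might look like in practice, a team could compare the composition of a training set against a reference population and flag under-represented groups. The sketch below is hypothetical: the age bands, reference shares, and the 20% tolerance are illustrative assumptions, not thresholds taken from the AI Act itself.

```python
# Hypothetical sketch: flag under-represented groups in a training set.
# Group labels, reference shares, and the tolerance are illustrative
# assumptions, not requirements prescribed by the regulation.

def underrepresented_groups(train_counts, reference_shares, tolerance=0.2):
    """Return groups whose share of the training data falls more than
    `tolerance` (relative) below their share of the reference population."""
    total = sum(train_counts.values())
    flagged = []
    for group, ref_share in reference_shares.items():
        share = train_counts.get(group, 0) / total
        if share < ref_share * (1 - tolerance):
            flagged.append(group)
    return flagged

# Example: rows per age band in the training set vs. population shares.
train_counts = {"18-30": 1200, "31-50": 5200, "51+": 600}
reference_shares = {"18-30": 0.25, "31-50": 0.45, "51+": 0.30}

print(underrepresented_groups(train_counts, reference_shares))
# → ['18-30', '51+']
```

In this example, the youngest and oldest bands fall well below their population shares, so the check would prompt the team to document the gap or rebalance the data before training.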

The AI Act's compliance obligations for financial sector players

The AI Act distinguishes obligations according to the role played in the value chain: providers of systems (the developers or designers of the AI) and deployers (the banks, insurers, and asset managers who integrate these systems into their processes). 

For providers, the main obligations include designing a risk management system, producing technical documentation, ensuring data and model quality, and implementing traceability and logging mechanisms. 
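The traceability requirement can be pictured as keeping an auditable record of every automated decision. The sketch below is a minimal, hypothetical illustration; the field names, the JSON-lines format, and the example values are assumptions, not a format prescribed by the regulation.

```python
# Hypothetical sketch of a decision audit log for a credit-scoring system.
# Field names and the JSON-lines format are illustrative assumptions.
import json
import datetime
import io

def log_scoring_decision(log_file, model_version, applicant_id,
                         features, score, decision):
    """Append one scoring decision to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced the decision
        "applicant_id": applicant_id,     # pseudonymised identifier
        "features": features,             # inputs actually used by the model
        "score": score,
        "decision": decision,             # e.g. "approve" / "refer to human"
    }
    log_file.write(json.dumps(record) + "\n")
    return record

# Usage: in production this would write to durable, access-controlled storage.
buf = io.StringIO()
rec = log_scoring_decision(buf, "scoring-v2.3", "A-0042",
                           {"income": 38000, "tenure_months": 54},
                           0.81, "approve")
```

Keeping the model version and the exact inputs alongside each decision is what later makes it possible to reconstruct, and explain, why a given applicant received a given outcome.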

For deployers, obligations include, among others, using the system in accordance with its intended purpose, retaining certain logs, carrying out fundamental rights impact assessments in certain cases, and ensuring human oversight of decisions. 

In the financial sector, this means that institutions will have to precisely document their scoring or risk assessment practices, and be able to produce this information during an audit by the competent authorities, such as the ACPR in France. 

The risks of non-compliance

Failure to comply with the AI Act exposes financial companies to several types of risks: 

  • regulatory, with fines potentially reaching several percent of the turnover of the entity concerned, 
  • legal, with litigation linked to decisions perceived as discriminatory, 
  • reputational, in the event of bad press or loss of customer confidence, 
  • financial, as the cost of late compliance can prove much higher than that of early preparation. 

 

As the ACPR reminds us, compliance with the European AI Regulation should not be considered a one-off burden, but rather a continuous process of risk management and system quality improvement. 

How to prepare from today?

For financial institutions, preparation cannot wait until the 2026 regulatory deadline. Three key areas appear to be priorities: 

  1. Map the AI uses already present within the organisation, identifying those that fall into the high-risk category. 
  2. Establish clear data governance, bringing together legal experts, data scientists, business professionals, and compliance managers. 
  3. Develop technical and organisational documentation that can be used as evidence in the event of an inspection. 
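The mapping step can start as something very simple: a structured inventory of AI uses with a provisional AI Act category for each. The sketch below is purely illustrative; the system names, owners, and risk labels are hypothetical, and real classification under the regulation requires a case-by-case legal analysis.

```python
# Hypothetical AI-use register: a starting point for the mapping step.
# System names and risk labels are illustrative; actual classification
# under the AI Act requires case-by-case legal analysis.

AI_REGISTER = [
    {"system": "credit-scoring-v2", "purpose": "creditworthiness assessment",
     "risk": "high", "owner": "retail-credit"},
    {"system": "chatbot-faq", "purpose": "customer FAQ answers",
     "risk": "limited", "owner": "customer-service"},
    {"system": "life-pricing-model", "purpose": "life insurance pricing",
     "risk": "high", "owner": "actuarial"},
]

def high_risk_systems(register):
    """List the systems that the inventory marks as high-risk."""
    return [entry["system"] for entry in register if entry["risk"] == "high"]

print(high_risk_systems(AI_REGISTER))
# → ['credit-scoring-v2', 'life-pricing-model']
```

Even a register this simple gives compliance teams a shared starting point: the high-risk entries are the ones that need documentation, logging, and human oversight first.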

 

Businesses that anticipate these steps will have a competitive advantage: they will be able to demonstrate their seriousness to their clients, reduce the risk of sanctions and improve the robustness of their systems. Conversely, those that delay risk having to make costly and hasty upgrades. 

The financial sector is on the front line regarding the AI Act. While this regulation represents a significant effort, it also presents an opportunity: to strengthen customer trust, improve model quality, and reduce legal risks. 

The AI Act is not just a regulatory constraint: it is a lever for building a more transparent, fairer, and more resilient financial sector. 

FAQ – AI Act and the Financial Sector

When did the AI Act come into force and when does it apply?
The regulation entered into force on 1 August 2024. Bans on certain practices will apply from February 2025. Rules on general-purpose models and the establishment of national authorities are expected in August 2025. Core obligations for high-risk systems, including credit scoring and insurance underwriting, will come into effect in August 2026. 

What financial practices does the AI Act prohibit?
Prohibited financial practices under the AI Act include: 

  • AI systems that manipulate people's behaviour: this covers AI that exploits people's vulnerabilities to push them into harmful financial decisions, or that uses subliminal influence techniques. 
  • AI systems that perform social credit scoring: except in very specific, tightly framed cases, the AI Act prohibits using AI to assess individuals' creditworthiness on the basis of their social behaviour or general reputation. 
  • AI systems that collect specific personal data without informed consent for credit risk assessment: this protects privacy and ensures that credit decisions are not based solely on personal data collected inappropriately. 
  • AI systems that create deepfakes for harmful purposes in the financial sector: for example, using AI to generate false information or false representations in order to deceive investors or clients. 

The AI Act takes a risk-based approach, under which AI systems deemed unacceptable are prohibited; the practices above fall into this most restrictive category.
Among other things, the regulation prohibits social scoring systems, certain large-scale biometric surveillance practices, or emotional scoring. In finance, this primarily concerns cases where sensitive personal data would be misused. 

What is high-risk AI in finance?
The following are considered high-risk: credit scoring, solvency assessment, pricing for life and health insurance, as well as certain uses relating to fraud detection or the control of essential services. These systems must meet strengthened obligations regarding transparency, supervision, and robustness. 

What are the penalties for non-compliance?
The penalties provided for can amount to several per cent of turnover, depending on the seriousness of the breaches. These are in addition to the reputational and legal risks for financial institutions that do not comply with the regulation. 

How can a bank or an insurance company prepare itself?
Preparation involves three key stages: mapping existing usage, establishing robust data governance, and developing technical and organisational documentation. Anticipating these steps now helps smooth costs and strengthens customer confidence. 

The JEMS PATH2AI COMPLIANCE Offer

Complying with the AI Act can seem complex, especially for critical sectors like finance. At JEMS, we have designed PATH2AI COMPLIANCE, a pragmatic approach that allows you to: 

  • Activate only the building blocks you need, without a heavy process. 
  • Benefit from integrated legal, data, and AI expertise. 
  • Achieve concrete deliverables from the early stages: usage mapping, maturity grid, prioritised roadmap. 
  • Build AI governance suited to your organisation, whether centralised, federated or integrated into your DevOps/MLOps practices. 

 

Proposed building blocks: Audit, Training, Governance Plan, Regulatory Framework, Guidelines, AI Procedures, Controls & Supervision, Governance Structuring & Tools, Governance/RACI/Risk (GRC) Platform, Compliance audit dedicated to generative AI…  
