The AI Act: Understanding the European regulation on AI

As of 2 February 2025, the first provisions of the AI Act have become applicable, notably the ban on unacceptable-risk practices, marking a key milestone in the regulation of artificial intelligence technologies in Europe. The regulation, which formally entered into force on 1 August 2024, introduces a classification of AI systems based on their risk level and imposes strict requirements on developers and deployers, particularly for systems considered high-risk.

From now on, companies must anticipate these new constraints to ensure their systems are compliant. The AI Act is not limited to restricting certain uses; it also imposes requirements regarding transparency, data governance, and cybersecurity.

A classification of AI systems by risk level

The AI Act establishes a risk hierarchy across four distinct levels:

  • AI systems posing an unacceptable risk are banned outright. These include social scoring solutions, emotion recognition in the workplace and in educational institutions, predictive policing, and the untargeted scraping of facial images to build recognition databases. Narrow exceptions exist, notably for the use of real-time remote biometric identification by law enforcement in strictly defined situations.

  • High-risk AI systems, which concern sensitive areas, remain authorised but are subject to strict obligations. This category includes systems used in biometrics, critical infrastructure, education, employment, public and financial services, border control, and the administration of justice. These systems must meet specific criteria regarding risk management, human oversight, and technical robustness.

  • Other systems are considered limited-risk and must comply with transparency obligations. This concerns tools like chatbots, whose users must be informed that they are interacting with an AI, as well as deepfakes, which must be clearly identified as artificial. The use of biometric data must also be subject to user consent, except in certain cases provided for by law.

  • Finally, AI systems presenting a minimal risk are not subject to specific regulatory constraints. This is notably the case for video games incorporating AI or anti-spam filters, which are considered safe for users' fundamental rights and security.
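By way of illustration, the transparency obligation for chatbots could be met by surfacing a disclosure at the start of every session. This is a minimal sketch; the function name and wording below are hypothetical examples, not mandated by the regulation:

```python
def start_chat_session(user_name: str) -> str:
    # Illustrative transparency notice: the user must be told they are
    # interacting with an AI system, not a human operator.
    disclosure = (
        f"Hello {user_name}, you are chatting with an AI assistant, "
        "not a human operator."
    )
    return disclosure

# The disclosure is shown before any exchange takes place.
print(start_chat_session("Alice"))
```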

The stakeholders concerned by the AI Act

The obligations imposed by the AI Act primarily concern providers of high-risk AI systems, whether established in the European Union or abroad, provided their solution is used in the European market. These providers must implement risk governance covering the entire lifecycle of their systems, ensure the traceability of data used for training, and design mechanisms guaranteeing effective human oversight of decisions made by AI. They must also ensure the robustness, cybersecurity, and accuracy of the results generated by their systems.

To comply with these requirements, providers must implement several measures detailed in Articles 8 to 17 of the regulation:

  • Implement a risk management system throughout the lifecycle of the high-risk AI system;
  • Apply data governance practices to the data used for training, validation, and testing;
  • Draw up technical documentation;
  • Design the high-risk AI system for record-keeping, so that it automatically logs events relevant to identifying national-level risks and substantial modifications throughout its lifecycle;
  • Provide instructions for use to downstream deployers to enable their own compliance;
  • Design the high-risk AI system so that deployers can implement human oversight;
  • Design the high-risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity;
  • Implement a quality management system to ensure compliance with the rules.
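The record-keeping measure above can be sketched as a thin audit-logging layer around model predictions. This is purely illustrative, assuming a hypothetical `AuditLogger` design; the class, field names, and model identifier are examples, not anything prescribed by the regulation:

```python
import json
import time
import uuid


class AuditLogger:
    """Illustrative audit trail for an AI system's predictions.

    Records a timestamped entry per inference that could later support
    traceability reviews and post-market monitoring.
    """

    def __init__(self):
        self.records = []

    def log_event(self, model_version, input_summary, output,
                  confidence, human_reviewed=False):
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            # Store a hash or summary of the input, never raw personal data.
            "input_summary": input_summary,
            "output": output,
            "confidence": confidence,
            "human_reviewed": human_reviewed,
        }
        self.records.append(record)
        return record

    def export(self):
        # Serialise the full trail, e.g. for auditors or authorities.
        return json.dumps(self.records, indent=2)


logger = AuditLogger()
entry = logger.log_event("credit-scorer-1.2.0", "sha256:ab12...",
                         "approved", 0.91)
print(len(logger.records))
```

In practice such a trail would be written to tamper-evident storage rather than kept in memory; the point here is only that each decision leaves a structured, exportable record.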

 

Companies deploying these high-risk solutions also have obligations. They must follow providers' instructions to ensure compliant use and monitor the decisions made by the AI. They are also responsible for verifying the relevance of the input data provided to the system and promptly reporting any issue to the provider and the competent authorities. These requirements apply both to companies established in the European Union and to those located outside it, provided the AI system's output is used within European territory.

General-purpose AI under scrutiny

General-purpose AI models, such as those underlying large generative AI platforms, also fall within the scope of the AI Act. All providers of these models must supply detailed technical documentation, ensure compliance with copyright rules, and publish a summary of the data used for training.

When these models present a systemic risk, additional obligations apply. They must undergo robustness assessments and testing to identify potential vulnerabilities, serious incidents must be reported to the competent authorities, and enhanced cybersecurity protocols must be put in place. To ensure the proper application of these measures, an AI Office has been established within the European Commission to oversee compliance by the stakeholders concerned.

What are the impacts, and how can you anticipate these changes?

The entry into force of the AI Act represents a major turning point for companies developing or operating artificial intelligence systems. This regulatory framework requires rapid adaptation to new requirements regarding data governance, traceability, and system monitoring.

At JEMS, we implement solutions to ensure data traceability and guarantee model monitoring throughout the lifecycle. Thanks to our expertise in MLOps and data governance, we help industrialise and secure the use of artificial intelligence within a strict regulatory framework.

The AI Act should not be seen as a constraint, but as an opportunity to improve the transparency and robustness of artificial intelligence systems. Companies that anticipate these changes will gain a competitive advantage by integrating, starting today, responsible practices that comply with the new requirements.

Contact our experts to structure your compliance and take advantage of this regulatory evolution.
