The AI Act: Understanding the European regulation on AI
Since 2 February 2025, the first obligations of the AI Act have officially applied, marking a key milestone in the regulation of artificial intelligence technologies in Europe. This regulation introduces a classification of AI systems based on their risk level and imposes strict requirements on developers and deployers, particularly for systems considered high-risk.
From now on, companies must anticipate these new constraints to ensure their systems are compliant. The AI Act is not limited to restricting certain uses; it also imposes requirements regarding transparency, data governance, and cybersecurity.
The AI Act establishes a risk hierarchy across four distinct levels:
- Unacceptable risk: practices banned outright, such as social scoring by public authorities or manipulative techniques exploiting vulnerabilities;
- High risk: systems used in sensitive areas such as recruitment, credit scoring, critical infrastructure, or law enforcement, subject to strict requirements;
- Limited risk: systems subject to transparency obligations, such as chatbots that must inform users they are interacting with an AI;
- Minimal risk: all other systems, which face no specific obligations.
The obligations imposed by the AI Act primarily concern providers of high-risk AI systems, whether established in the European Union or abroad, provided their solution is used in the European market. These providers must implement risk governance covering the entire lifecycle of their systems, ensure the traceability of data used for training, and design mechanisms guaranteeing effective human oversight of decisions made by AI. They must also ensure the robustness, cybersecurity, and accuracy of the results generated by their systems.
To comply with these requirements, providers must implement several measures detailed in Articles 8 to 17 of the regulation, including:
- a risk management system maintained throughout the system's lifecycle (Art. 9);
- data governance ensuring the quality and relevance of training, validation, and testing datasets (Art. 10);
- technical documentation and automatic record-keeping (Arts. 11 and 12);
- transparency and instructions for use addressed to deployers (Art. 13);
- effective human oversight (Art. 14);
- accuracy, robustness, and cybersecurity (Art. 15).
Companies using these high-risk solutions also have obligations. They must follow providers' instructions to ensure compliant use and monitor AI-led decisions. They are also responsible for verifying the relevance of the data provided to the system and for immediately reporting any issues to providers and competent authorities. These requirements apply both to companies established in the European Union and to those located outside the EU, provided that the AI's outputs are used within European territory.
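By way of illustration, the traceability and human-oversight duties described above can be supported by a simple audit trail. The sketch below is purely hypothetical: the AI Act does not prescribe any specific tooling, and all function and field names here are assumptions for the example.

```python
import hashlib
import json
from datetime import datetime, timezone


def dataset_fingerprint(path: str) -> str:
    """Return a SHA-256 hash of a training-data file, so the exact
    dataset version used for training can be traced later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def log_decision(model_version: str, inputs: dict, output, reviewer=None) -> dict:
    """Record one AI-assisted decision with enough context for a human
    reviewer to audit, contest, or override it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # stays None until a human signs off
    }
    print(json.dumps(record))  # in practice: append to a durable audit store
    return record
```

In practice, such records would be stored in a tamper-evident log and linked to the technical documentation required by the regulation.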
General-purpose AI models, such as those powering large generative AI platforms, also fall within the scope of the AI Act. All providers of these models must supply detailed technical documentation, ensure compliance with copyright rules, and publish a summary of the data used for training.
When these models present a systemic risk, additional obligations apply. They must undergo robustness assessments and testing to identify potential vulnerabilities, serious incidents must be reported to the competent authorities, and enhanced cybersecurity protocols must be put in place. To ensure the proper application of these measures, an AI Office established within the European Commission will oversee stakeholders' compliance with these rules.
The entry into force of the AI Act represents a major turning point for companies developing or operating artificial intelligence systems. This regulatory framework requires rapid adaptation to new requirements regarding data governance, traceability, and system monitoring.
At JEMS, we implement solutions to ensure data traceability and guarantee model monitoring throughout their lifecycle. Thanks to our expertise in MLOps and data governance, we offer solutions for industrialising and securing the use of artificial intelligence within a strict regulatory framework.
The AI Act should not be seen as a constraint, but as an opportunity to improve the transparency and robustness of artificial intelligence systems. Companies that anticipate these changes will gain a competitive advantage by adopting responsible practices that comply with the new requirements today.
Contact our experts to structure your compliance and take advantage of this regulatory evolution.