Governance and the AI Act: how to reconcile legal constraints, effective governance, and innovation?

The AI Act, which entered into force in August 2024, is not limited to technical or legal requirements: it demands a genuine transformation of governance within organisations. Artificial intelligence systems classified as high-risk must be governed by oversight, documentation, and data-quality mechanisms. 

Yet compliance cannot rest on a single department. Legal, IT, data, business units, and innovation departments must collaborate to build cross-functional governance. Only under this condition can the AI Act be respected effectively, without hindering innovation. 

Why is cross-functional governance indispensable?

The CNIL reminds us that the regulation does not create an isolated framework but aligns with existing texts such as the GDPR. This means that the processing of personal data carried out within AI systems remains subject to the GDPR, in addition to the new obligations of the AI Act. An impact assessment under the AI Act may also be coordinated with the data protection impact assessment (DPIA) provided for by the GDPR. 

If governance is fragmented, the risk is twofold: a legal reading without technical vision, or conversely, a technical implementation without regard for fundamental rights. In both cases, the organisation exposes itself to regulatory blind spots. 


The limitations of siloed approaches

In many companies, AI initiatives originate from innovation or the IT department, while compliance is managed by legal and risk managers. This siloed approach quickly shows its limitations: 

  • Lawyers do not always master the technical logic of the models. 
  • Data scientists do not necessarily know the regulatory scope of a processing operation. 
  • Business teams struggle to integrate the regulation into their concrete use cases. 

 

Result: friction, delays, and sometimes suspended projects. The AI Act requires a rethink of this organisation to prevent compliance from becoming a hindrance. 

Are you concerned your AI projects will be slowed down by organisational silos?

The pillars of effective AI Act governance

Successful governance rests on four complementary pillars: 

  1. Legal and risk managers: they interpret the regulations, define obligations, assess non-compliance risks, and ensure documentary compliance. 
  2. The IT department and CDOs: they guarantee data quality and traceability, oversee technical integration, and document systems. 
  3. Business functions and HR: they specify use cases, identify impacts on end users, and integrate the regulations into daily processes. 
  4. Innovation: it ensures that compliance does not block the adoption of new solutions, and allows opportunities to be explored by integrating constraints from the outset. 

Possible governance models

The AI Act does not impose a one-size-fits-all model. Three main approaches are conceivable: 
  • Centralised: a dedicated team manages compliance for all AI projects. Advantage: coherence. Limit: risk of cumbersome processes. 
  • Federated: each business entity or subsidiary has its own AI compliance representatives, with central coordination. Advantage: closeness to actual uses. Limit: risk of heterogeneity. 
  • Integrated into DevOps/MLOps: AI Act requirements are built directly into the model development and deployment processes. Advantage: operational efficiency. Limit: requires technical maturity. 
The choice will depend on the size of the organisation, its culture, and the role AI plays in its activities. 

Do you want to know which governance model suits your organisation?
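For the integrated DevOps/MLOps model, the idea of building AI Act requirements directly into the deployment process can be sketched as a "compliance gate" that blocks a model release until its documentation artifacts are present. This is a minimal, hypothetical illustration: the artifact names and the required set are assumptions for the example, not terms taken from the regulation.

```python
# Illustrative sketch: a pre-deployment compliance gate in an MLOps pipeline.
# The required-artifact list below is a hypothetical example, not AI Act text.
REQUIRED_ARTIFACTS = {
    "technical_documentation",  # e.g. model card, intended purpose
    "risk_assessment",          # risk management file
    "data_provenance_log",      # training-data quality and traceability
    "human_oversight_plan",     # supervision measures
}

def compliance_gate(release_artifacts: set[str]) -> tuple[bool, set[str]]:
    """Return (passes, missing_artifacts) for a candidate model release."""
    missing = REQUIRED_ARTIFACTS - release_artifacts
    return (not missing, missing)

# A release missing two artifacts would be blocked by the pipeline.
ok, missing = compliance_gate({"technical_documentation", "risk_assessment"})
print(ok, sorted(missing))
```

In a real pipeline, such a check would run as a CI step before deployment, so that compliance evidence is produced alongside the model rather than reconstructed afterwards.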

The benefits of clear governance

Beyond compliance, well-thought-out governance brings tangible benefits: 

  • Cost reduction: fewer corrective audits or re-audits. 
  • Time savings: smoother, better-documented processes. 
  • Adoption by business units: end users better understand the constraints and take ownership of the tools. 
  • Increased confidence: regulators and customers are reassured by the transparency and robustness of the processes. 

 

As the ACPR recalls, the AI Act should not be seen merely as a compliance text, but as an opportunity to strengthen the sector's resilience. 

How to initiate this governance from today?

Anticipating implementation is essential, as obligations for high-risk systems will take effect as early as August 2026. Organisations can, from now on: 

  1. Create a cross-functional committee bringing together legal, IT, data, and business units. 
  2. Define clear roles and responsibilities, along with validation pathways. 
  3. Implement steering tools (dashboards, AI registers, impact analysis monitoring). 
  4. Train teams to make them aware of the AI Act obligations. 
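Among the steering tools above, an AI register can start as a simple structured record of each system, its owner, and its risk profile. The sketch below is a hypothetical example of such an entry: the field names and risk labels are assumptions for illustration, not the AI Act's legal taxonomy.

```python
# Illustrative sketch of an internal AI system register entry.
# Field names and risk labels are hypothetical, not the regulation's taxonomy.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRegisterEntry:
    system_name: str
    business_owner: str               # accountable business unit
    risk_class: str                   # assumed labels: "high", "limited", "minimal"
    processes_personal_data: bool     # triggers GDPR coordination
    last_review: date
    documentation: list[str] = field(default_factory=list)

    def needs_dpia_coordination(self) -> bool:
        # A high-risk system processing personal data is where the AI Act
        # impact assessment and the GDPR DPIA would need to be coordinated.
        return self.processes_personal_data and self.risk_class == "high"
```

Even a register this simple gives the cross-functional committee a shared view of which systems require coordinated legal and technical review.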

 

The AI Act introduces a new way of working, where compliance becomes everyone's responsibility. By aligning legal, data, and business functions, organisations are not just meeting regulatory requirements: they are equipping themselves with a robust framework for developing trustworthy and sustainable AI. 

Building good governance today is about transforming compliance into a lever for innovation.

FAQ – Governance and AI Act

Why is governance essential in the AI Act?
Because compliance cannot rest on a single department. The regulation imposes technical, legal, and organisational obligations, and only cross-functional governance can cover all of these dimensions. 

Which roles should be involved in governance?
Lawyers and risk managers, IT and CDOs, business functions and HR, as well as innovation teams. Each brings indispensable expertise. 

Which governance model to choose?
This depends on the size and culture of the organisation. The three main options are: centralised, federated, or integrated into DevOps/MLOps. 
