11 July 2025

Su Cizem

CeSIA Welcomes the EU's GPAI Code of Practice and Urges AI Companies to Sign On

CeSIA welcomes today’s publication of the final Code of Practice for General-Purpose AI as a major first step toward implementing the EU AI Act's requirements for the most advanced AI systems. However, CeSIA deplores several significant shortcomings in the final text. 

"While the final text was weakened by last-minute industry lobbying, it still provides a solid foundation. Now that companies have obtained the concessions they wanted, they have no excuse not to sign and fully uphold the Code. The AI Office, meanwhile, must oversee it firmly. The focus must finally shift from negotiation to execution."
   - Charbel-Raphaël Ségerie, Executive Director, CeSIA

What was gained and what was lost

CeSIA, along with other civil society organizations focused on AI safety, contributed both through the formal feedback process and through joint letters calling for stronger formal requirements and greater transparency in the evaluation of models before deployment. Several of these recommendations were heard and integrated into the final version of the Code. In particular, a summary of the model evaluation documentation must now be made public systematically, and external evaluations are nominally required for models posing systemic risk. However, providers can bypass this obligation simply by self-certifying that a model's risks are "similar" to those of existing models, or even by claiming that no qualified third-party evaluator is available.

The Code also suffers from significant weaknesses:

  • No pre-deployment documentation is required. Risk assessments and model reports are required only after models are placed on the market, removing any opportunity for regulators to assess risks beforehand.
  • Emergency preparedness is no longer required. While earlier drafts included clearer guidance on pre-defined incident response plans, the final version makes this guidance optional.
  • Whistleblower protections were gutted. The Code now contains only a single sentence banning retaliation. Earlier drafts envisioned alignment with EU whistleblower law and secure reporting channels; these are now gone.

Overall, model developers retain control over systemic risk definitions, evaluator selection, red-teaming processes, and post-deployment monitoring. Many measures are phrased as general principles, without a precise framework or specific implementation requirements. 

From negotiation to enforcement

While the process included civil society organizations in its early phases, the final weeks of negotiation saw a clear shift: additional closed-door workshops were convened with major AI companies, and we believe several safety and accountability provisions were softened or removed as a result.

Though imperfect, this Code is an essential starting point. As Europe prepares to move from negotiation to application of the AI Act for general-purpose models, it is now up to institutions and companies to ensure it delivers on its promises.
