3/7/2025

Arthur Grimonpont

CeSIA urges European Commission to stay the course on AI Act: "The uncertainty is technological, not regulatory"

CeSIA has joined a coalition of international organisations and experts in an open letter to Ursula von der Leyen, urging the Commission to resist pressure from the tech industry to weaken the EU's AI legislation. The signatories are calling on the European executive to strengthen safety measures for general-purpose AI models and to increase the resources of the European AI Office.

CeSIA is adding its voice to that of around 15 civil society organisations and 30 international experts, including Nobel laureates Daron Acemoglu and Geoffrey Hinton, to call on the European Commission to prioritise citizen safety over the commercial interests of tech companies.

Tech industry lobbyists complain of so-called 'regulatory uncertainty' in an attempt to weaken the EU's AI Act. But the primary source of uncertainty is clearly not regulatory: it's technological. Advanced AI systems, subject to fewer technical standards than a toaster, are currently being deployed to hundreds of millions of users without any external oversight. By correcting this dangerous situation and harmonising rules across Member States, the AI Act and its Code of Practice offer not only regulatory certainty but, above all, the urgent and essential protection European citizens need.

— Charbel Raphaël Segerie, Executive Director of the Centre for AI Safety (CeSIA)

The open letter puts forward three main requests to the European Commission:

  1. Require independent evaluation for general-purpose AI models to ensure they are safe before being placed on the market.
  2. Allow for regular reviews of the Code of Practice to ensure it keeps pace with the rapid evolution of the technology and its associated risks.
  3. Double the staff of the AI Office and ensure it has the necessary financial resources and technical expertise to enforce the regulation effectively.

The publication of this open letter comes amid intense pressure from several tech industry lobbies, some of which have publicly called for the regulation's application to be suspended by invoking a "stop-the-clock" clause, arguing that the implementation guidelines were not ready.

Yet the rapid progress of AI is accompanied by growing risks. Companies developing cutting-edge AI, such as OpenAI, Anthropic, and Google, have themselves acknowledged that their new generations of models pose increasing threats, particularly concerning critical chemical, biological, radiological, and nuclear (CBRN) risks.

Read the open letter