Our Mission

The Center for AI Safety (CeSIA) is a non-profit organization working to reduce risks associated with artificial intelligence. Our mission is to build a culture of AI safety in France and Europe by offering technical, policy, and regulatory solutions adapted to the major challenges posed by this technology.

Based in Paris, we focus our efforts on France and Europe, regions that play a decisive role in global discussions about the future of AI.

Our Strategy

Our strategy is built on four pillars:

  • Academic Training: We provide Europe's first university-level education on AI safety at ENS Ulm and Paris-Saclay University. We also run international ML4Good bootcamps in partnership with Erasmus+ and are developing a MOOC with ENS Paris-Saclay.
  • Research and Development: We publish the AI Safety Atlas, an online reference manual on AI safety. We are also developing the BELLS benchmark, an evaluation tool for testing the robustness of AI safeguards, and contributing to the theoretical framing of warning systems (the theory of "warning shots").
  • Institutional Advocacy: We work with key institutions such as the EU and the OECD, international governance initiatives such as the Global Partnership on AI (GPAI), French regulatory bodies (ANSSI, LNE), and technology companies (Mozilla, Mistral AI) to ensure AI risks are taken into account in regulation and public policy.
  • Public Awareness: We organize symposiums with world-renowned experts, publish opinion pieces, collaborate with YouTubers whose videos have reached up to 4 million views, run interactive demonstrations of AI capabilities and risks (demo-jams), and facilitate the AI Safety Workshop.

Our Economic Model

We are a non-profit organization supported by European and British philanthropic organizations and individual donors. To protect ourselves from conflicts of interest, we do not accept any funding from private AI companies.

We have received generous grants from the following donors, whom we warmly thank:

  • Effektiv Spenden
  • AI Safety Tactical Opportunities Fund (AISTOF)
  • Longview Philanthropy
  • A newly created Swedish foundation
  • Survival and Flourishing Fund

Some ML4Good bootcamps have been funded by Erasmus+, the European Union's education and training programme.

The AI Safety Atlas is primarily funded, through the Manifund platform, by Ryan Kidd, co-director of the ML Alignment & Theory Scholars (MATS) program, based in Berkeley, California.
