Arthur Grimonpont
On September 22, 2025, the Global Call for AI Red Lines was published. Initiated by the French Center for AI Safety (CeSia) and co-organized with The Future Society and the UC Berkeley Center for Human-Compatible AI, the call stands out for the exceptional prominence and diversity of its signatories, its reception at the highest diplomatic levels, and its extensive media coverage.
The Global Call unites the voices of over 300 renowned figures from around the world and is supported by more than 90 organizations. Signatories are calling on states to conclude an international agreement by the end of 2026 to prohibit the uses and capabilities of AI that present risks to global security and human dignity.
Consult the appeal and see the signatories at: red-lines.ai
Distinguished personalities from all walks of life have lent their voices to the Global Call.
Several prominent signatories have underscored the urgency of the situation:
“It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damage to humanity, and we should act accordingly.”
— Ahmet Üzümcü, former Director-General of the Organization for the Prohibition of Chemical Weapons (OPCW), which was awarded the 2013 Nobel Peace Prize.
“In its long history, humanity has never encountered an intelligence superior to its own. Within a few years, it will. But we are far from being prepared for it in terms of regulations, safeguards, and governance.”
— Csaba Kőrösi, former President of the United Nations General Assembly.
“The current race towards ever more powerful and autonomous AI systems represents a major risk to our societies, and international collaboration is urgently needed to address it. Defining red lines is a crucial step to prevent serious risks related to AI.”
— Yoshua Bengio, Turing Award winner (2018).
In recent years, a growing consensus has formed among experts about the risks posed by AI. Some of these constitute particularly serious threats, the likelihood of which is rapidly increasing as the technology advances. The Global Call aims to ban AI capabilities and uses that pose universally unacceptable risks.
Among the examples of red lines likely to garner broad international support is the prohibition of lethal autonomous weapons systems that operate without human control.
Faced with these global and irreversible threats, international cooperation is the only effective strategy. Drawing inspiration from landmark agreements such as the Treaty on the Non-Proliferation of Nuclear Weapons (1970) and the Biological Weapons Convention (1975), the Global Call stresses that even rival nations share a common interest in preventing catastrophes with potentially irreversible and border-transcending consequences.
While several jurisdictions have enacted legislation in recent years, such as the EU AI Act, these frameworks are insufficient due to their limited geographical scope and the absence of binding verification and enforcement mechanisms at the global level.
“The development of highly capable AI could be the most important event in human history. It is imperative that world powers act decisively to ensure it is not the last.”
— Stuart Russell, Professor of Computer Science at the University of California, Berkeley.
The Global Call was launched in a context of growing international awareness of the dangers of AI and the need for its regulation. The principle of binding limits for AI had never been discussed so openly at this diplomatic level before September 2025.
During a historic UN Security Council session on AI, Secretary-General António Guterres made a firm call to ban lethal autonomous weapons systems operating without human control. Following this, AI pioneer Yoshua Bengio took the floor to present the Global Call to establish red lines.
A consensus on the need for clear international limits is beginning to emerge. China has stated: “It is essential to ensure that AI remains under human control and to prevent the emergence of lethal autonomous weapons that operate without human intervention.” France affirmed that “no life-or-death decision should ever be transferred to an autonomous weapon system operating without any human control.” The United States, while rejecting the idea of “centralized global governance,” acknowledged the need for international cooperation to address global threats, committing in particular to developing “an AI verification system that everyone can trust” to enforce the Biological Weapons Convention.
This momentum has been reinforced by converging initiatives, such as the open letter “Fraternity in the Age of AI” launched by experts commissioned by the Vatican, which also calls for setting fundamental limits so that AI serves “all of humanity.”
The Global Call has been launched and heard at the highest level. To translate this momentum into concrete action, CeSia, alongside its partners, aims to help drive a diplomatic process to reach an agreement by the end of 2026.
Numerous opportunities can be explored. The new United Nations Independent Scientific Panel on AI could establish a working group to technically define these prohibitions. The AI Impact Summit in India in February 2026 presents an opportunity for states to endorse an initial set of standardized red lines, building on the commitments already made by the industry at the Seoul AI Safety Summit. The UN Global Dialogue on AI Governance could lead a consultation with scientists, civil society, and industry to refine these red lines by mid-2026. Various intergovernmental platforms could carry a draft international agreement.
The appeal has received extensive international media coverage, with articles in more than 300 publications worldwide, including The New York Times, Le Monde, TIME Magazine, El País, the Associated Press (AP), NBC News, The Verge, India Today, and Euronews.
For more detailed answers on what red lines are, their importance, concrete examples, and their enforcement mechanisms, we invite you to consult the full FAQ available on the Global Call website.