An open letter by AI practitioners, researchers, and professionals working across the tech industry, calling for urgent responsibility in how AI and cloud infrastructure are deployed. This is a global call to action. If it speaks to you, read below.
Clause 0: An Open Letter from Professionals Working Across AI
We are the Ethical AI Alliance: a global network of AI policy leads, human rights lawyers, chief ethics officers, compliance directors, researchers, and technologists. Together, we work across governance, risk, and oversight of AI systems. Many of us have worked, or continue to work, inside or alongside the world’s largest technology companies. We are committed to advancing AI governance through a compliance-driven, internationally aligned approach, grounded in legal, ethical, and human rights frameworks.
Today, we are raising a signal.
As AI systems and cloud infrastructure are increasingly embedded in conflict zones, the frameworks we rely on — from the EU AI Act, the OECD AI Principles, and the UN Guiding Principles on Business and Human Rights, to the UNESCO Recommendation on the Ethics of AI and the Montréal Declaration for Responsible AI — are not holding. The use of AI systems in war zones, deportation infrastructure, digital surveillance, and population control now poses a credible risk to democracy, the rule of law, and fundamental human rights.
This letter arises in the context of the ongoing use of AI and cloud infrastructure in Gaza — a situation currently under investigation by both the International Criminal Court (ICC) and the International Court of Justice (ICJ). It also reflects broader concerns over the role of AI in systems of displacement, surveillance, and population control — harms that are increasingly embedded in cross-border infrastructures with limited accountability.
We believe this gap demands a response — not from outside protest, but from within the field.
Clause 0 is what should have been written — the foundational ethical safeguard that precedes all others. It is the limit that must not be crossed: that the use of AI, data, and cloud infrastructure must never enable or escalate unlawful violence, occupation, surveillance, or forced displacement.
We write this as professionals responsible for due diligence, human rights assessments, and ethical oversight — trained to ask: Where does this go? Who does this harm? Who will be held responsible?
And so we call on companies, institutions, and regulators to:
Conduct urgent, independent due diligence on the use of AI systems and infrastructure in conflict zones, especially where international humanitarian law is at risk of being breached.
Suspend or restrict services that are directly contributing to violations of international law or human rights, including but not limited to surveillance, targeting, and profiling systems.
Adhere to principles of transparency and open access to information, including public disclosure of partnerships, deployments, and government contracts involving AI and cloud infrastructure used in active conflict environments.
Require explicit classification of deployments in conflict zones and in regimes violating international law as high-risk within existing AI risk frameworks — including enhanced due diligence, public disclosure, and third-party audits in line with the EU AI Act and international human rights standards.
Include expertise and representation from across civil society and affected communities in audit, review, and risk governance processes — especially those from historically marginalized, surveilled, or occupied regions.
This letter is not a protest. It is a professional act of responsibility — a call to uphold the very frameworks we claim to follow.
The credibility of AI governance depends not on what is written in policy — but on what is done in practice, when it matters most.
We invite AI governance professionals, compliance leads, researchers, and engineers around the world to co-sign this letter — and stand with the Ethical AI Alliance and its global community in reasserting the basic conditions of responsible AI, starting with Clause 0.