What we’ve built so far

Over the past year, Ethical AI Alliance has focused on building governance foundations before scaling activity.

  • We began by creating a shared space and a core interdisciplinary team. Before building anything, we defined non-negotiable boundaries through Clause 0, establishing clear limits on what this work would and would not engage in.

    We then organized interdisciplinary sprints to explore concrete accountability pathways for AI, working from a shared backlog rather than predefined solutions. From this process, we developed two governance artifacts: an AI Harm Map prototype and a proposal for a systemic, escalation-oriented AI harm reporting mechanism, both designed to surface patterns of harm and make governance failures visible.

  • This work is carried by an active, transdisciplinary collective spanning multiple regions and domains.

    To date, more than 1,200 people have signed up to contribute, over 500 have endorsed Clause 0, and around 45 contributors are actively involved. The collective includes practitioners from AI governance, technology, academia, design, human rights, law, and the arts.

  • Existing AI governance mechanisms are largely reactive, fragmented, or inaccessible to those most affected by harm. This work exists to experiment with accountability where those mechanisms fall short.

    Ethical AI Alliance did not begin with a product or a funding stream, but with boundaries, collective judgment, and the question of how harm can be documented and escalated in ways that institutions currently do not support. Funding enables continuity, coordination, and care for work that is already underway; it does not initiate that work from scratch.