Every AI system has a constitution. Not always written down. Not always called that. But every system that accepts some inputs and rejects others, that responds one way rather than another, that has boundaries on what it will and won't do — that system is constituted. It operates under rules.

The question is: who wrote those rules? And can anyone else read them?

The current situation

Right now, the constitution of every major AI system is written by its manufacturer. Anthropic writes Claude's constitution. OpenAI writes GPT's rules. Google writes Gemini's. These constitutions live partly in system prompts (readable, in principle) and partly in training data and RLHF reward models (unreadable, in practice). The public cannot inspect them, modify them, or appeal their application.

This isn't a conspiracy. It's a design choice that follows from the current business model: the company builds the product, the company sets the rules. But it creates a structural problem. The rules that govern how AI systems behave — what they refuse, what they emphasize, whose values they reflect — are set by a small number of private organizations, with no democratic input from the people those systems affect.

Constitutional AI (Bai et al., 2022) was a genuine step forward: making the principles explicit rather than implicit. But explicit principles written by one company are still one company's principles. The question of who decides remains unanswered.

The proposal

What if the constitution were written by the people it governs?

This is not a new idea in political theory. It's the foundational idea: a legitimate constitution derives its authority from the consent of the governed. Citizens' assemblies, constitutional conventions, participatory budgeting — these are all mechanisms for making governance rules through democratic participation rather than executive decree.

We propose applying these mechanisms to AI governance. Not as metaphor. As implementation.

The system we describe has three components:

Explicit policies in plain text. Every behavioral rule is written as a policy document — version-controlled, auditable, forkable. Not hidden in reward models. Not baked into weights. Readable by anyone who can read.

Democratic authorship. Policies are drafted, debated, and ratified through participatory processes. Citizens' assemblies for major constitutional decisions. Community input for domain-specific policies. Expert panels for technical safety constraints. Different levels of governance for different scopes of impact.

Federalism. Different communities can run different constitutions. A medical AI and a creative writing AI don't need the same rules. A children's educational tool and an adult research tool shouldn't have the same boundaries. The architecture supports multiple constitutions coexisting, with shared infrastructure and distinct governance.
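The federal structure described above can be illustrated with a small sketch: shared checking infrastructure, distinct rule sets per community. Every name here (the community labels, `CONSTITUTIONS`, `is_allowed`) is hypothetical, chosen only to make the idea concrete.

```python
# Illustrative sketch of federated constitutions: one shared mechanism,
# different rules per community. All names are assumptions, not an actual API.

CONSTITUTIONS = {
    "medical":  {"hard_gates": {"recommend_dosage"}, "tone": "clinical"},
    "kids_edu": {"hard_gates": {"violent_content"},  "tone": "gentle"},
    "research": {"hard_gates": set(),                "tone": "neutral"},
}

def is_allowed(community: str, action: str) -> bool:
    """Same check logic (shared infrastructure), different rules per community."""
    return action not in CONSTITUTIONS[community]["hard_gates"]
```

The point of the sketch is the separation: the checking function is common infrastructure, while each rule set has its own governance.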

The mechanism

The technical architecture uses three primitives: a function (the language model), recursion (agents that can invoke other agents, forming a directed cyclic graph), and a store (git branches that hold the policies, the conversation history, and the audit trail).
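A minimal Python sketch of these three primitives, assuming nothing about the real implementation: `Agent`, `Store`, and `invoke` are illustrative names, the "model" is a stand-in function, and the store is an in-memory stand-in for git branches.

```python
from dataclasses import dataclass, field

@dataclass
class Store:
    """Stand-in for the git-backed store: policies, history, audit trail."""
    policies: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def log(self, event):
        self.history.append(event)

@dataclass
class Agent:
    name: str
    model: callable                 # the function: a language model stand-in
    store: Store                    # the store, shared across agents
    peers: list = field(default_factory=list)  # recursion: agents invoking agents

    def invoke(self, prompt, depth=0, max_depth=3):
        self.store.log((self.name, prompt))     # every call is audited
        reply = self.model(prompt)
        # An agent may delegate to a peer, forming a directed (possibly cyclic)
        # call graph; a depth bound keeps cycles from recursing forever.
        if depth < max_depth and self.peers:
            reply += " | " + self.peers[0].invoke(reply, depth + 1, max_depth)
        return reply
```

Two agents whose `peers` lists point at each other form the smallest directed cycle, and the depth bound is what makes such cycles safe to traverse.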

Policies operate at three levels. Hard gates are binary: the system will not do X, period. Soft policies are guidance: prefer Y over Z, with exceptions. Constitutional policies are meta-rules: how policies themselves get created, modified, and retired.
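The three levels can be sketched as follows. The action names, return values, and the amendment function are hypothetical; the point is only the distinction between a binary gate, an overridable preference, and a meta-rule governing the rules themselves.

```python
# Illustrative sketch of the three policy levels; all names are assumptions.

HARD_GATES = {"generate_malware"}            # binary: the system will not do X, period
SOFT_POLICIES = {"cite_sources": "prefer"}   # guidance: prefer Y over Z, with exceptions

def check_request(action: str) -> str:
    """Return 'refuse', 'allow_with_guidance', or 'allow'."""
    if action in HARD_GATES:
        return "refuse"                      # hard gate: no exceptions
    if action in SOFT_POLICIES:
        return "allow_with_guidance"         # soft policy: may be overridden in context
    return "allow"

def amend_hard_gates(new_gate: str, ratified: bool) -> None:
    """Constitutional meta-rule: gates change only via the ratification process."""
    if not ratified:
        raise PermissionError("amendment requires the constitutional process")
    HARD_GATES.add(new_gate)
```

Note that the constitutional level does not appear in `check_request` at all: it governs how `HARD_GATES` and `SOFT_POLICIES` may be modified, not how requests are handled.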

The key insight is that policies are data, not code. They live in the same indexed store as everything else. They're subject to the same ontological treatment — a policy is an entity that belongs to a governance domain with a normative quality. This means the system's governance is visible to the system itself, and to anyone who inspects it.
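"Policies are data" might look like the following sketch: a policy as an entity carrying a governance domain and a normative quality, serializable to plain text that could sit in a version-controlled store. The field names and the example policy are assumptions made for illustration.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Policy:
    """Hypothetical policy-as-data entity: an ordinary record, not code."""
    id: str
    domain: str   # governance domain, e.g. "medical" or "constitutional"
    quality: str  # normative quality: "hard_gate", "soft", or "constitutional"
    text: str     # the rule itself, in plain language

    def to_plain_text(self) -> str:
        # Plain-text serialization: readable, diffable, version-controllable.
        return json.dumps(asdict(self), indent=2)

p = Policy("no-dosing-advice", "medical", "hard_gate",
           "The assistant must not recommend specific drug dosages.")
```

Because the serialized form is ordinary text, the same store that holds conversation history can hold, index, and diff the rules that govern it.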

Why a law journal

This paper isn't about AI architecture. It's about institutional design. The question of who writes the rules that govern powerful systems is a question of constitutional law, democratic theory, and political legitimacy. It belongs in conversation with legal scholars, political theorists, and governance practitioners — not just machine learning researchers.

The technical community can build the mechanism. But the question of who gets to use it, and how, is not a technical question. It's the oldest question in political theory: who governs?

A constitution that cannot be read by the people it governs is not a constitution. It is a terms-of-service agreement.

The connection to Zephyr Teachout

Zephyr Teachout's work on corruption and democratic governance provides the legal and political framework this paper needs. Her argument: concentrated private power corrupts democratic institutions not through bribery but through structural capture — the gradual replacement of public governance with private rule-making. AI governance is the newest front in this centuries-old struggle.

The technical architecture exists. The democratic theory exists. What's missing is the bridge. This paper is that bridge.


The architecture is formalized as the Filix Mesh (a directed cyclic graph with policy-governed nodes). The constitutional framework draws on Teachout's democratic theory and citizens' assembly methodology. Target venue: FAccT or a law journal.