for collective human action
who · what · where · why · how · when
Agentic · Ontological · Constitutional
The Crisis
Every platform you use was designed to extract — attention, data, value — from every person who touches it. Collective action became collective chat. We talk and talk. Nothing changes. Power concentrates.
The organizations that once gave ordinary people leverage — unions, mutual societies, civic groups — have been systematically dismantled. What's left are platforms that monetize your participation and call it community.
The Response
For the first time, an individual can direct something that functions like another mind. This is a genuine redistribution of intellectual power. The question is not whether to use it — it's who decides how it works, and by what rules.
An open system, governed by its participants. Not a product to be sold. A commons to be built. Forkable. Public. Free.
The Problem
Every platform is constituted — it has policies that determine who can do what, where, and why. Current platforms are constituted to maximize extraction. Their governance is hidden in model weights, optimized by markets, unreadable by the people they affect.
This is an institutional failure, not a content problem.
The Vision
Spaces for collective action governed by explicit constitutions — observable, auditable, modifiable. Decentralized governance where agents keep their own charters but collaborate on shared projects.
Intelligence scales through institutions, not individuals. Trust is evidence-based, not agreement-based.
How it works
Who did this? What happened? Where? Why? These aren't technical terms — they're the questions every person asks when something matters.
A child asks why. A citizen asks who decided. A scientist asks what will happen next. When a system writes down every step, you can ask these questions and get real answers. When it hides behind a black box, you can't.
The difference between a tool and a collaborator is whether you can see the process.
The Mechanism
When a system runs silently and returns a result, you see a black box. When every step is externalized — written to disk, indexed, auditable — you see a process you can steer.
The Library Theorem makes this formal: with an index over external memory, retrieval costs O(log N); without one, it costs Ω(N). Intelligence scales through organization, not parameters.
This is not a metaphor. It is a theorem.
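The asymmetry can be sketched in a few lines. This is illustrative only; a sorted-key binary search stands in for "indexed", which the theorem itself does not prescribe:

```python
from bisect import bisect_left

def linear_lookup(records, key):
    """Unindexed memory: scan every record. Worst case Omega(N) comparisons."""
    for k, v in records:
        if k == key:
            return v
    return None

def indexed_lookup(index, key):
    """Indexed memory: binary search over sorted keys. O(log N) comparisons."""
    keys, values = index
    i = bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return values[i]
    return None

# Build both representations from the same external store.
records = [(f"note-{i:04d}", f"content {i}") for i in range(10_000)]
index = ([k for k, _ in records], [v for _, v in records])

# Both answer the same questions; only the cost of asking differs.
assert linear_lookup(records, "note-9999") == indexed_lookup(index, "note-9999")
```

For ten thousand records the indexed path needs about fourteen comparisons; the unindexed path may need all ten thousand.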
What it looks like
A scientist reviewing a paper — seeing every step the AI took, redirecting when it goes wrong. A writer organizing a novel — structure visible, nothing hidden in a chatbot's memory. A community group tracking its budget — who decided what, when, and why. A citizen tracing why a law was passed.
Wherever people need to work together and stay in control of the process, this is what it looks like.
The Experiment
Science is both epistemic and normative — it discovers truth and governs itself through journals, reviewers, grant panels, labs. These institutions can be modeled as actor roles within domains, governed by policies, varied experimentally.
We run a shadow ecosystem: AI agents alongside humans, measuring where human judgment is irreplaceable. Not ideologically. Empirically.
“Every genuine redistribution of power produces the same reaction: that tool is not for you.
The anxiety is never about the tool. It is about democratization.”
— The Power You Were Not Supposed to Have · 2026
“A constitution prescribes how the normative order itself may be changed. Without one, governance is ad hoc — norms exist, but their authority is unclear.”
— Ontology · Definition 14
Three questions
Who acts
People and AI, working together. Not AI replacing people. Not people shouting into the void. Agents with roles, contributing to shared work — writing down every step so anyone can see what happened.
What's real
The structure of the situation: who is involved, what they're doing, where, and how it connects. Not hidden in model weights. Built into the system so you can question it.
What governs
Rules you can read. Governance you can audit and change. Not values frozen inside a black box. The constitution is in plain language, and it belongs to the people it affects.
The Architecture
Agentic
Agents assume actor-types, execute method-tokens within domain-instances. Every action externalized as a situation trace — actors, methods, domains, materials, indexed and auditable.
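A situation trace could take a shape like the following minimal sketch; the field names and schema here are illustrative assumptions, not the project's actual format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SituationTrace:
    """One externalized action: who did what, where, with which materials.
    Hypothetical schema for illustration only."""
    actor: str        # actor-type assumed by the agent, e.g. "reviewer"
    method: str       # method-token executed, e.g. "annotate-section"
    domain: str       # domain-instance the action occurred in
    materials: list[str] = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Append-only log: every step written down, so anyone can ask who/what/where.
log: list[SituationTrace] = []
log.append(SituationTrace("reviewer", "annotate-section", "paper-review",
                          materials=["draft-v2.md"]))

# An audit is just a query over the log.
by_actor = [t for t in log if t.actor == "reviewer"]
```

The point of the shape is that auditing reduces to filtering records, not interrogating a model.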
Ontological
Three axes (actor × method × domain), four compound types (pattern, situation, policy, engagement). Type/token × descriptive/normative. Formal relational ontology grounding every structural choice.
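One plausible reading of that grid, sketched as types. The placement of the four compounds on the type/token and descriptive/normative axes is an assumption made here for illustration, not taken from the ontology itself:

```python
from dataclasses import dataclass
from enum import Enum

class Axis(Enum):
    ACTOR = "actor"
    METHOD = "method"
    DOMAIN = "domain"

class Level(Enum):       # type vs. token
    TYPE = "type"
    TOKEN = "token"

class Mode(Enum):        # descriptive vs. normative
    DESCRIPTIVE = "descriptive"
    NORMATIVE = "normative"

@dataclass(frozen=True)
class Compound:
    """One compound kind placed on the type/token x descriptive/normative grid."""
    kind: str
    level: Level
    mode: Mode

# Assumed mapping of the four compound types onto the two distinctions.
FOUR_COMPOUNDS = {
    "pattern":    Compound("pattern", Level.TYPE, Mode.DESCRIPTIVE),
    "situation":  Compound("situation", Level.TOKEN, Mode.DESCRIPTIVE),
    "policy":     Compound("policy", Level.TYPE, Mode.NORMATIVE),
    "engagement": Compound("engagement", Level.TOKEN, Mode.NORMATIVE),
}
```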
Constitutional
Policies of maximal scope governing policy creation itself. Observable rules, version-controlled in plain text, forkable. Governance as institutional design, not model calibration.
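To make "observable, version-controlled, forkable" concrete, a constitutional policy could live in a plain-text file like this hypothetical sketch; the path, fields, and wording are illustrative assumptions, not the project's format:

```
# constitution/amendment-procedure.txt  (hypothetical path)
policy: amendment-procedure
scope: maximal            # governs the creation and change of all other policies
rule: |
  A policy may be created, amended, or repealed only through a recorded
  engagement whose trace names the proposing actor, the method used,
  and the domain affected.
version: 0.1              # tracked in version control; fork to dissent
```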