HAAK is a research project at the intersection of artificial intelligence, formal ontology, and governance. It asks a single question: what institutional architecture do AI systems need in order to reason well, act transparently, and remain under human control?

The project is named for Theodore Haak, who in 1646 began convening the group of natural philosophers that would become the Royal Society. Haak did not discover anything himself. He built the infrastructure—the correspondence network, the meeting schedule, the translation pipeline—that let others discover things together. HAAK takes this as its model: the hard problem is not intelligence but the organization of intelligence.

The work spans formal proof, systems architecture, neuroscience, and political theory. It is grounded in two results proved in Lean 4 (machine-checked, zero axioms assumed beyond the type theory itself): that organized external memory is exponentially more efficient than unorganized memory, and that a single primitive relation is sufficient to reconstruct every relation in standard formal ontologies. From these foundations, the project builds outward toward a constitutional framework for AI governance.

The Research Program

The argument begins with the Library Theorem: a formal proof that indexed retrieval from an external store achieves logarithmic cost where unindexed retrieval requires linear cost, and that over multi-step reasoning the difference compounds: unindexed reasoning incurs exponential total cost where indexed reasoning stays quadratic. This is not a metaphor. It is a theorem about computation, and it applies directly to any system—biological or artificial—that must reason beyond its immediate working memory.
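
The per-step cost separation is easy to see empirically. The sketch below is illustrative only, not the Lean development: it counts comparisons for unindexed (linear scan) versus indexed (binary search over a sorted store) retrieval across repeated lookups.

```python
def linear_lookups(store, queries):
    """Unindexed retrieval: scan until the item is found. ~O(n) per query."""
    cost = 0
    for q in queries:
        for item in store:
            cost += 1
            if item == q:
                break
    return cost

def indexed_lookups(store, queries):
    """Indexed retrieval over a sorted store: binary search, ~O(log n) per query."""
    cost = 0
    for q in queries:
        lo, hi = 0, len(store)
        while lo < hi:
            cost += 1
            mid = (lo + hi) // 2
            if store[mid] < q:
                lo = mid + 1
            elif store[mid] > q:
                hi = mid
            else:
                break
    return cost

store = list(range(65_536))
queries = [65_535] * 50  # 50 reasoning steps, worst-case lookups

print(linear_lookups(store, queries))   # 50 * 65,536 comparisons
print(indexed_lookups(store, queries))  # ~16 comparisons per query
```

With a 65,536-item store, fifty unindexed lookups cost over three million comparisons; fifty indexed lookups cost a few hundred. Over a long reasoning chain, that per-step gap is what compounds.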

The ontological foundation is equally precise. Standard formal ontologies (BFO, DOLCE) posit dozens of primitive relations—part-of, located-in, participates-in, has-quality, and so on. We prove that all of these reduce to a single primitive: belongs-to, qualified by the nature of the belonging. Twelve reductions across both frameworks, each machine-checked. The result is not just elegant; it makes ontology computationally tractable for AI systems that need to reason about what exists and how things relate.
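
The shape of the reduction can be sketched in a few lines. This is a hypothetical illustration, not the machine-checked Lean formalization: each standard relation becomes the single belongs-to primitive carrying a mode that qualifies the nature of the belonging.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BelongsTo:
    """The single primitive: subject belongs to whole, in some mode."""
    subject: str
    whole: str
    mode: str  # qualifies the nature of the belonging

# Standard ontological relations recovered as qualified belonging.
# Names and modes here are illustrative.
def part_of(x, y):         return BelongsTo(x, y, "parthood")
def located_in(x, y):      return BelongsTo(x, y, "location")
def participates_in(x, y): return BelongsTo(x, y, "participation")
def has_quality(x, q):     return BelongsTo(q, x, "inherence")  # the quality belongs to its bearer

facts = {
    part_of("hippocampus", "brain"),
    located_in("brain", "skull"),
    has_quality("brain", "plasticity"),
}

print(part_of("hippocampus", "brain") in facts)  # True
```

Because every relation bottoms out in one primitive, a knowledge store needs only one kind of edge, which is part of what makes the ontology computationally tractable.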

A bounded transformer—any model with a fixed context window—is formally equivalent to a finite automaton. It cannot, by itself, perform unbounded computation. What makes it Turing-complete is access to an external read/write store. This is the theoretical link between the Library Theorem and practical AI architecture: the context window is working memory, the external store is long-term memory, and the index is what makes the difference between brute-force search and structured retrieval.
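
The classic illustration of this boundary is a^n b^n (for n ≥ 1): no finite automaton recognizes it, because doing so requires unbounded counting. A finite controller plus an external read/write store does it easily. The sketch below is illustrative, not the formal equivalence proof.

```python
def accepts_anbn(s):
    """Recognize a^n b^n (n >= 1): a finite controller with two states,
    plus an external read/write store (a stack) that does the counting."""
    store = []           # external memory: unbounded
    state = "reading_a"  # controller: finitely many states
    for ch in s:
        if state == "reading_a":
            if ch == "a":
                store.append(ch)   # write one mark per 'a'
            elif ch == "b" and store:
                store.pop()        # erase one mark per 'b'
                state = "reading_b"
            else:
                return False
        elif state == "reading_b":
            if ch == "b" and store:
                store.pop()
            else:
                return False
    return state == "reading_b" and not store

print(accepts_anbn("aaabbb"))  # True
print(accepts_anbn("aabbb"))   # False
```

The controller never holds more than a constant amount of state; all unbounded bookkeeping lives in the store. That division of labor is exactly the context-window/external-memory split.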

The neuroscience application follows naturally. Working memory maps to the context window. Long-term memory maps to the external store. The hippocampus maps to the index. The brain solved the problem the Library Theorem formalizes long before we proved it. This gives extended cognition—the philosophical thesis that minds extend beyond skulls—its first formal foundation.

Applied reflexively to the model itself, the framework asks: what fraction of a model's weights constitute retrievable knowledge rather than irreducible computation? Early results on GPT-2 suggest that roughly 65% of weights qualify for externalization—they encode knowledge that could, in principle, be stored in an indexed external library rather than baked into parameters.

The arc culminates in governance. Every AI system already has a constitution—policies that determine who can do what, under what conditions, with what authority. The question is whether that constitution is legible. Current systems encode their governance in model weights, optimized by markets, unreadable by the people they affect. The alternative is explicit: policies in plain text, version-controlled, authored through participatory democratic process, with federalism across communities. Constitutional AI governance, in other words, modeled on the oldest successful technology for organizing collective action among agents with divergent interests.
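
What "legible" means in practice can be made concrete. The sketch below is hypothetical—the field names and policies are invented for illustration, not drawn from HAAK's actual framework—but it shows the core property: authority lives in plain, version-controllable data that anyone can read, and absence of an explicit policy means absence of authority.

```python
# Hypothetical policy table: plain data, diffable, auditable.
POLICIES = [
    {"actor": "agent", "action": "write_workspace", "requires": None},
    {"actor": "agent", "action": "deploy_change",   "requires": "human_approval"},
    {"actor": "human", "action": "amend_policy",    "requires": "community_vote"},
]

def authorized(actor, action, granted=()):
    """An action is permitted iff an explicit policy names it
    and any condition it carries has been granted."""
    for p in POLICIES:
        if p["actor"] == actor and p["action"] == action:
            return p["requires"] is None or p["requires"] in granted
    return False  # no explicit policy: not authorized

print(authorized("agent", "deploy_change"))                      # False
print(authorized("agent", "deploy_change", {"human_approval"}))  # True
```

The contrast with weight-encoded governance is the point: a reader can inspect, diff, and amend this table; no one can do that to a policy distributed across billions of parameters.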

The System

HAAK is its own experiment. The research described above was conducted inside a system where AI agents and a human researcher collaborate through an externalized reasoning workspace. Every step—every literature search, every draft, every editorial decision, every architectural change—is written to disk, indexed, and auditable. The human steers; the agents execute; the process is visible.
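
A minimal sketch of such a workspace, assuming nothing about HAAK's actual implementation: every step is appended to a log, indexed by topic for structured retrieval, and hash-chained to the previous entry so the history is auditable.

```python
import hashlib
import json

log, index, prev = [], {}, "0" * 64  # append-only log, topic index, chain head

def record(topic, actor, content):
    """Append one step: indexed by topic, hash-chained for auditability."""
    global prev
    entry = {"topic": topic, "actor": actor, "content": content, "prev": prev}
    prev = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    index.setdefault(topic, []).append(len(log))  # topic -> positions in log
    log.append(entry)

record("lit-search", "agent", "queried external sources")
record("draft",      "agent", "wrote section 2")
record("review",     "human", "requested revision of section 2")

# Indexed retrieval: jump straight to the relevant steps.
print([log[i]["content"] for i in index["review"]])
```

The index is what makes the log a workspace rather than a transcript: retrieval by topic is direct, and tampering with any entry breaks the chain from that point forward.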

This is not incidental to the research. It is the research. The Library Theorem predicts that externalized, indexed memory should outperform unindexed memory. HAAK tests this prediction on itself, daily, across real scientific work. The ontology structures the workspace. The constitutional framework governs the agents. The question at the center—where is human participation irreplaceable?—gets answered empirically, one intervention at a time.

Who

Zach Mainen is a principal investigator at the Champalimaud Centre for the Unknown in Lisbon, where he leads the Systems Neuroscience Lab. His experimental work spans two decades of research on serotonin, decision-making, and the neural basis of uncertainty. HAAK represents a second line of work—formal ontology, AI architecture, and institutional design—that grew out of the conviction that neuroscience's deepest questions about memory, reasoning, and agency are the same questions that AI governance must answer.