Utopian governance using technology, inc generative AI

Electrohabermas, digital deliberation, platform democracy

2025-10-27 — 2026-03-17

Wherein Hayek’s dispersed signals are recast as personal advocate agents, by whom contracts are bargained at scale; privacy is guarded by zero‑knowledge proofs, and a Coasean Singularity is contemplated.

adversarial
AI safety
bounded compute
communicating
cooperation
culture
economics
extended self
faster pussycat
incentive mechanisms
institutions
language
machine learning
markets
mind
money
neural nets
NLP
security
technology
wonk
Figure 1

Could Hayek’s dream of distributed information flows through the economy be put into practice in a more humane way using AI agents?

This is the all-in version of civic tech: we lean into massive-scale agent management.

I have many thoughts about the risks and opportunities here. For now, just a placeholder.

1 Coasean Singularity

Seb Krier argues in Coasean Bargaining at Scale that a Coasean Singularity is arriving.

[…] consider AGI deployed as a vast ecology of personalized agents and systems. This emerging ecosystem is what Tomašev et al. (2025) characterize as the “virtual agent economy”: a new economic layer where agents transact and coordinate at scales and speeds beyond direct human oversight. While this ecology will contain countless specialized agents, let’s focus on the one that matters most from an individual’s perspective: your personal advocate. Think of it as a fiduciary extension of yourself: a tireless, extremely competent digital representative, closely tied to you, its principal.

What could such an agent do? In principle, it can negotiate, calculate, compare, coordinate, verify, monitor, and much more in a split second. Through many multi-turn conversations, tweaking knobs and sliders, and continuous learning, it could also develop an increasingly sophisticated (though never perfect) model of who you are: your preferences, personal circumstances, values, resources, and more. This model should evolve over time; an agent’s alignment should track its principal’s own evolution. Recent research on negotiation agents (Goyal, Chang, and Terry 2024) finds that “human-agent alignment” is profoundly personal. Users expect agents not only to execute goals but also to embody their identity, requiring alignment on everything from preferred negotiation tactics to personal ethical boundaries and the specific public reputation they want to project. There are of course important privacy considerations here, but none of these seem fundamentally intractable. For example, these systems could be built on technologies like zero-knowledge proofs and differential privacy, ensuring that preferences are communicated and aggregated without revealing sensitive underlying data.
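The zero-knowledge-proof machinery is beyond a snippet, but the differential-privacy half of that last claim is easy to sketch. Here is a minimal illustration (the function name and the binary-vote framing are my own, not from Krier’s post) of releasing an aggregate preference tally via the Laplace mechanism, so that no single participant’s preference can be confidently inferred from the published number:

```python
import math
import random

def dp_count(votes, epsilon=1.0):
    """Differentially private count of True votes via the Laplace mechanism.

    The exact count has sensitivity 1 (one person flipping their vote moves
    it by at most 1), so adding Laplace(1/epsilon) noise yields
    epsilon-differential privacy for the released tally.
    """
    exact = sum(votes)
    # Sample Laplace(0, 1/epsilon) noise by inverse-CDF from a uniform draw.
    u = random.random() - 0.5
    scale = 1.0 / epsilon
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return exact + noise
```

The noise is mean-zero, so the tally is unbiased and useful in aggregate while each individual contribution stays plausibly deniable; a real deployment would additionally budget `epsilon` across repeated queries rather than releasing tallies freely.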

See also Shahidi et al. (2025).

The argument is that a sufficiently advanced economic negotiation might be indistinguishable from democratic consensus-building, and that the economic and political spheres might merge in a Coasean Singularity—or at least that this is a possible future.
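To make the “Coasean” part concrete, a toy model (entirely my own illustration, not from Krier’s essay) of the classic result: with zero transaction costs, rational bargaining reaches the efficient outcome whatever the initial assignment of rights; only the side payment changes. The argument is that personal advocate agents could drive real transaction costs toward that frictionless limit.

```python
def coasean_bargain(value_to_a, harm_to_b, right_holder):
    """Toy Coase theorem with zero transaction costs.

    Agent A wants to run an activity worth `value_to_a` to itself that
    imposes `harm_to_b` on agent B. Whoever holds the right, bargaining
    yields the efficient outcome: the activity proceeds iff its value
    exceeds the harm. Assumes the bargaining surplus is split 50/50,
    which puts any payment at the midpoint (value_to_a + harm_to_b) / 2.
    """
    efficient = value_to_a > harm_to_b
    if right_holder == "A":
        # A may act by default; B pays A to stop only when stopping is efficient.
        payment_to = None if efficient else "A"
    else:
        # B may veto by default; A pays B for permission only when acting is efficient.
        payment_to = "B" if efficient else None
    payment = (value_to_a + harm_to_b) / 2 if payment_to else 0.0
    return efficient, payment_to, payment
```

Running it with the rights assigned either way returns the same allocation and differs only in who compensates whom; the interesting (and contested) leap in the Coasean Singularity argument is that agent-mediated negotiation makes this zero-transaction-cost idealization approximately real.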

2 “Meaning economy”

A slightly more galaxy-brained version: price signals and contracts are the medium of communication, and the goal is to coordinate economic activity in such a way that it maximizes something richer than the classical revealed preferences of neoclassical economics.

A related concept from Joe Edelman (of Couchsurfing fame): markets align with “deep human values”.

3 Thick models of value

I feel like the thick models of value concept might need to be pried apart from the specific institutional design of market intermediaries, but it’s worth noting that the two are related.

Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value:

Full-Stack Alignment is a collaborative project led by the Meaning Alignment Institute and a small group of outside researchers. Our goal is to align AI and institutions with what people value, from each individual’s pursuit of their vision of the good life to the collective achievement of shared values and ideals. In other words, we want AI systems and institutions that fit human values and sociality well, where the AI systems and their institutions are compatible.

We argue that our current societal stack is misaligned in many places; we have markets that favor things that are addictive and isolating, and our democratic institutions are polarizing us. Things will get worse as AIs displace workers entirely, outspeed regulation, and outmaneuver lawyers. In our near-term future we risk sudden or gradual disempowerment as our economic and democratic agency erodes.

This challenge may seem insurmountable. We disagree, and argue this intractability stems from how we model what humans care about in our AI, markets and democracies—in our position paper, we summarize these approaches as “Preferentist models of value” (PMV) and “Values-as-text” (VAT). Both of these fail to capture the richness of human motivation. Consequently, a desire for “meaningful connection” becomes “engagement metrics” to recommender systems, which becomes “daily active users” to companies, and “quarterly revenue” in markets. Instead, we propose a new paradigm — “Thick models of value” (TMV). This is an emerging field, but new research shows great promise and we are optimistic about using it to achieve Full-Stack Alignment.

4 References

Burton, Lopez-Lopez, Hechtlinger, et al. 2024. “How Large Language Models Can Reshape Collective Intelligence.” Nature Human Behaviour.
Edelman, Tan, Lowe, et al. 2025. “Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value.”
Franklin, and Ashton. 2022. “Preference Change in Persuasive Robotics.”
Franklin, Ashton, Gorman, et al. 2022. “Recognising the Importance of Preference Change: A Call for a Coordinated Multidisciplinary Research Effort in the Age of AI.”
Gabriel. 2020. “Artificial Intelligence, Values, and Alignment.” Minds and Machines.
Gabriel, Manzini, Keeling, et al. 2024. “The Ethics of Advanced AI Assistants.”
Goyal, Chang, and Terry. 2024. “Designing for Human-Agent Alignment: Understanding What Humans Want from Their Agents.” In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems.
Kim. 2020. “Deep Learning and Principal–Agent Problems of Algorithmic Governance: The New Materialism Perspective.” Technology in Society.
Klingefjord, Lowe, and Edelman. 2024. “What Are Human Values, and How Do We Align AI to Them?”
Levine, Chater, Tenenbaum, et al. 2024. “Resource-Rational Contractualism: A Triple Theory of Moral Cognition.” Behavioral and Brain Sciences.
Shahidi, Rusak, Manning, et al. 2025. “The Coasean Singularity? Demand, Supply, and Market Design with AI Agents.” Working Paper Series.
Tomašev, Franklin, Leibo, et al. 2025. “Virtual Agent Economies.”
Zhi-Xuan, Carroll, Franklin, et al. 2025. “Beyond Preferences in AI Alignment.” Philosophical Studies.