Utopian governance using generative AI
Electrohabermas, digital deliberation, platform democracy
2025-10-27 — 2025-11-16
Wherein deliberative experiments with a Habermas Machine are described, and personalized agent‑fiduciaries are proposed to negotiate on citizens’ behalf in mini‑publics that generate platform policies.
The counterpart to AI disempowerment of humans is Utopian governance enabled by generative AI. What’s the best, kindest and wisest collective behaviour we could achieve if generative AI helped govern? Would discussion help us?
This isn’t the same as wondering how we might democratize AI—that’s interesting too.
1 Habermas machine experiment
Ekeoma Uzogara’s summary of Tessler et al. (2024):
To act collectively, groups must reach agreement; however, this can be challenging when discussants present very different but valid opinions. Tessler et al. (2024) investigated whether artificial intelligence (AI) can help groups reach a consensus during democratic debate (see Nyhan and Titiunik (2024)). The authors trained a large language model called the Habermas Machine to serve as an AI mediator that helped small UK groups find common ground while discussing divisive political issues such as Brexit, immigration, the minimum wage, climate change, and universal childcare. Compared with human mediators, AI mediators produced more palatable statements that generated wide agreement and left groups less divided. The AI’s statements were clearer, more logical, and more informative without alienating minority perspectives. This work carries policy implications for AI’s potential to unify deeply divided groups.
See also (Hernández 2025; Volpe 2025).
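To make the mechanism concrete, here is a minimal sketch, in Python, of an LLM-mediated caucus loop in the spirit of the experiment above: draft a group statement from individual opinions, collect critiques, revise, repeat. It assumes a generic `generate(prompt)` completion function (hypothetical), and it uses a simple critique-and-revise loop rather than the paper’s candidate-ranking reward model, so treat it as an illustration, not a reimplementation.

```python
# Toy AI-mediator loop: draft a shared statement, gather per-participant
# critiques, and revise. `generate` stands in for any LLM completion call.

def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call (hypothetical)."""
    raise NotImplementedError

def mediate(question: str, opinions: list[str], rounds: int = 2) -> str:
    # Initial draft from the raw opinions.
    statement = generate(
        f"Question: {question}\n"
        + "\n".join(f"Participant {i + 1}: {o}" for i, o in enumerate(opinions))
        + "\nDraft one group statement that maximises agreement while "
          "acknowledging minority views."
    )
    for _ in range(rounds):
        # Each participant (or their agent) critiques the current draft.
        critiques = [
            generate(
                f"Question: {question}\nYour opinion: {opinion}\n"
                f"Proposed group statement: {statement}\n"
                "Say briefly what you would change before endorsing it."
            )
            for opinion in opinions
        ]
        # The mediator revises the statement in light of the critiques.
        statement = generate(
            f"Question: {question}\nCurrent statement: {statement}\n"
            "Critiques:\n" + "\n".join(critiques)
            + "\nRevise the statement to address these critiques."
        )
    return statement
```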
2 Platform democracy
The Challenge: Who decides? (On divisive platform policies)
- Complex policy issues: Online platforms must make policy decisions around controversial issues such as content moderation, political advertising, recommendations, and privacy.
- Deciders often compromised: Currently, either platform CEOs (and their teams) or powerful governments ultimately determine platform policy; often neither is rewarded for serving the public.
- Negligible public mandate: The public is continually impacted by these decisions and cares about their downstream outcomes (e.g. censorship, misinformation, violence, surveillance), but their perspectives are rarely incorporated (beyond one-sided studies).
- Platforms are stuck: Even platform CEOs often don’t want to be held responsible for these decisions—there may be no action that ‘looks good’ or which can forestall retaliation from partisan politicians or governments.
- No obvious alternative: Even within functional democracies, governments are often limited constitutionally or by partisan gridlock. Platform-based referendums have been attempted, but had negligible response rates.
The Context: New democratic mechanisms have handled tough issues at national scale.
- New democratic decision-making processes have now been shown to make thoughtful decisions and be broadly trusted, without most of the damaging political dynamics of referendums and elections, and for a tiny fraction of the cost.
- When designed well, these processes can work even when no existing powerful actor is trustworthy and when no one wants to be held responsible for a decision.
- They often involve creating a demographically representative “mini-public” that is compensated for a fixed time period to learn about an issue from many multi-stakeholder perspectives, deliberate together, and voice their conclusions (a toy selection sketch follows this list).
- This may seem idealistic and implausible. But these new “representative deliberation processes” have now been used to support complex policy-making around the world, tackling issues from abortion in Ireland to nuclear power in South Korea.
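As a toy illustration of the “demographically representative mini-public” step (my sketch, not from the source), selection can be done by stratified sortition: recruit a volunteer pool, bucket volunteers into census cells, and sample within each cell so the panel mirrors population shares.

```python
# Stratified sortition sketch: sample volunteers within demographic cells so
# the resulting panel mirrors census proportions.
import random
from collections import defaultdict

def select_mini_public(volunteers, census_shares, panel_size, seed=0):
    """volunteers: list of (person_id, demographic_cell) pairs.
    census_shares: dict mapping demographic_cell -> population share (sums to 1).
    Returns a list of selected person_ids."""
    rng = random.Random(seed)
    pools = defaultdict(list)
    for person, cell in volunteers:
        pools[cell].append(person)
    panel = []
    for cell, share in census_shares.items():
        quota = round(share * panel_size)
        pool = pools.get(cell, [])
        panel.extend(rng.sample(pool, min(quota, len(pool))))
    return panel

# Example: age-by-region cells and a 20-person panel drawn from 40 volunteers.
census = {("18-39", "urban"): 0.35, ("18-39", "rural"): 0.15,
          ("40+", "urban"): 0.30, ("40+", "rural"): 0.20}
volunteers = [(f"p{i}", cell) for i, cell in enumerate(list(census) * 10)]
print(select_mini_public(volunteers, census, panel_size=20))
```

Real-world sortition adds layers this sketch omits, such as weighting for unequal volunteering rates and auditing the selection lottery.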
The Opportunity: Platforms can use these processes to tackle controversial issues.
- Platforms, working with governments and civil society, can have experienced, neutral facilitators deploy these new processes for the toughest policy questions.
- Policy decisions will then be made by the impacted populations and informed by key stakeholders, often leading to a strong public mandate (which may even help defend against partisan or authoritarian overreach).
3 Agent Economies
Has Hayek’s dream been realized at last?
Seb Krier argues in Coasean Bargaining at Scale that a Coasean Singularity is arriving:
[…] consider AGI deployed as a vast ecology of personalized agents and systems. This emerging ecosystem is what Tomašev et al. (2025) characterize as the “virtual agent economy”: a new economic layer where agents transact and coordinate at scales and speeds beyond direct human oversight. While this ecology will contain countless specialized agents, let’s focus on the one that matters most from an individual’s perspective: your personal advocate. Think of it as a fiduciary extension of yourself: a tireless, extremely competent digital representative, closely tied to you, its principal.
What could such an agent do? In principle, it can negotiate, calculate, compare, coordinate, verify, monitor, and much more in a split second. Through many multi-turn conversations, tweaking knobs and sliders, and continuous learning, it could also develop an increasingly sophisticated (though never perfect) model of who you are, your preferences, personal circumstances, values, resources, and more. This should evolve over time: an agent’s alignment should follow the principal’s own evolution. Recent research (Goyal, Chang, and Terry 2024) on negotiation agents finds that “human-agent alignment” is profoundly personal. Users expect agents to not only execute goals but also embody their identity, requiring alignment on everything from preferred negotiation tactics to personal ethical boundaries and the specific public reputation they want to project. There are of course important privacy considerations here, but none of these seem fundamentally intractable. For example, these systems could be built on technologies like zero-knowledge proofs and differential privacy, ensuring that preferences are communicated and aggregated without revealing sensitive underlying data.
See also Shahidi et al. (2025).
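For the privacy point at the end of the quote, here is a minimal sketch (my own, not from Krier or Tomašev et al.) of the simpler of the two tools mentioned, differential privacy: each advocate agent reports its principal’s preferred option, and only Laplace-noised counts are released, so the published tally reveals little about any single report. Zero-knowledge proofs would need considerably more machinery and are not shown.

```python
# Differentially private preference aggregation: release Laplace-noised
# per-option counts instead of raw votes.
import random
from collections import Counter

def dp_tally(votes, options, epsilon=1.0, seed=0):
    """votes: one option label per agent. Returns noisy per-option counts,
    epsilon-differentially private with respect to changing one vote."""
    rng = random.Random(seed)
    counts = Counter(votes)
    # Changing one vote moves two counts by 1 each -> L1 sensitivity of 2.
    scale = 2.0 / epsilon

    def laplace_noise():
        # Difference of two exponentials is Laplace(0, scale).
        return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

    return {opt: counts.get(opt, 0) + laplace_noise() for opt in options}

# Five agents voting over three hypothetical platform policies:
print(dp_tally(["A", "B", "A", "C", "A"], options=["A", "B", "C"]))
```

Smaller epsilon means noisier counts and stronger privacy; a real deployment would also need to bound how many queries each agent’s report can feed into.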
4 Political economy of cognition
See political economy of cognition for foundational theory about what decision-making looks like in the age of AI.
5 As an epistemic problem
TODO [TODO clarify]
6 Incoming
There are many more ideas on the generic utopian governance page that don’t depend on AI, though they could still help us.
Joshua Tan is head of research at Metagov. I’m keen to see what the organization does next.
- A global network and crowdsourcing platform for researchers, educators, practitioners, policymakers, activists, and anyone interested in public participation and democratic innovations. Theory, methods, and case studies — not necessarily AI-heavy.
- Plurality: The Future of Collaborative Technology and Democracy
- Aviv’s primary focus is on ensuring that the governance of AI can keep up with the rate of AI advances, building on lessons from applied deliberative democracy to enable effective transnational governance and alignment. This involves framing (e.g., “Platform Democracy”), theory (e.g., “Generative CI”), and applied work: accelerating efforts to build out and pilot the organizational and technical infrastructure for deliberative governance (formally or informally advising efforts at Meta, Twitter, and OpenAI).
- Reimagining Democracy for AI (in the “Journal of Democracy”)
- The claim: government’s just an information-processing machine.
