Civil society and AI safety
movement building, field building, power building, in-fighting
2024-09-30 — 2025-10-13
Wherein the possibility of an AI‑safety social movement is examined, tracing movement theory, risks of fragmentation, and proposals such as citizens’ assemblies and watchdog infrastructures.
1 Social license for AI safety
Amazingly under-regarded, IMO, probably because of the technical roots of many of the first movers.
If AI safety is to become not just a niche technical concern but a society-wide governance contest, we need to see it through the lens of movements, not only research or mechanism design. The Social Change Lab map offers a useful breakdown of actors and risk zones, but the real question is causal: what moves the needle? Social movement theory tells us movements need resources, narrative work, opportunistic timing, and ecosystem-building. But the terrain is hostile: public awareness is low, the technology seems exotic and remote, regulatory inertia is baked in, and commercial interests are deep.
What would it mean for a mass movement to intervene effectively, and how plausible is it? Below is a sketch of how social movement theory might frame an “AI safety movement”, and what that suggests for strategy.
There are some interesting theories of change that might be relevant. It’s possible some campaigns will fail spectacularly, or fragment; that’s the nature of movement building in domains with high uncertainty and low salience (see analogies in early climate activism, vaccine policy, DDT regulation).
2 Theories of Change
The Social Change Lab’s blog post frames the AI safety problem as a massive imbalance: the capabilities and resources of AI development (private labs, venture capital, state actors) far exceed the organizing capacity of publics, NGOs or activist infrastructure. Their implicit theory of change is:
- Make currently opaque AI risks legible to broader publics (i.e. bring harms, probabilistic risk, governance failure into view).
- Mobilise legitimacy, moral pressure, narrative, and political leverage so that labs and regulators respond.
- Build durable institutions and counterweights (watchdog, civic oversight, public participation) that can channel that pressure over time.
They don’t lay out a detailed stage model (e.g. recruitment → disruption → policy uptake), but the categories of strategy (protest, narrative, field-building, deliberation) echo classic tactics in social movement theory. So, let us name-check some interesting theories that might help design a movement for AI safety.
2.1 Resource Mobilization / Entrepreneurial Theory
Pioneered by McCarthy and Zald (1977), this approach treats movements as rational actors needing resources (money, personnel, legitimacy, networks) to act. It shifts the focus from grievances or moral outrage to capacity: you may have a compelling cause, but without infrastructure you can’t sustain collective action.
- Advantages: Explains why well-funded, well-networked groups tend to persist and scale.
- Critiques: Tends to underplay culture, identity, or grassroots agency; can over-emphasize professionalisation. Some movements succeed with lean resources (e.g. decentralized digital campaigns).
In the AI context: the mapping highlights how thin movement infrastructure is. If we overestimate public altruism or underinvest in connective tissue (platforms, staff, coordination), we risk collapse.
2.2 Political Process / Political Opportunity
This school (McAdam 1982; Tilly 2004) adds the dimension of opportunity windows: elites may split, crises may open legitimacy gaps, or institutional thresholds may shift. Movements succeed not just by having resources, but by acting when constraints momentarily loosen. In the AI domain, one might see moments when a lab’s failure or scandal breaks public trust, or when a policy window opens (a legislative hearing, a regulatory mandate). But those windows are narrow and unpredictable.
2.3 Framing Theory / Cultural Work
Snow and Benford (1988) and others argue that movements succeed or fail depending on how well they frame problems (diagnosis, moral attribution, solution). Even if you have resources and opportunity, if your narrative doesn’t resonate (or, worse, alienates), you stall. Applied to AI: “AI is inevitable and unstoppable” is the default narrative, and that framing supports inertia. If the movement can reframe AI as a political choice, not a destiny, it may shift how people engage. But catastrophic framings (extinction risk) can backfire by raising fatalism or distrust, as some experimental research in climate messaging suggests.
2.4 Movement Ecology / Ecosystem Models
In recent years, activists and scholars have talked of ecologies: a diversity of groups with different roles (advocacy, litigation, protest, research, local organising) coexisting and interacting. The idea is that no single strategy dominates; the strength lies in the networked interplay (see the Ayni Institute / Open Philanthropy discussion of movement ecology). The ecological metaphor has critics: boundaries blur (who is inside the movement?), agency becomes diffuse, and coordination is nontrivial. In AI safety you can already see an incipient ecology: protest groups, policy institutes, public education projects, watchdog labs. The trick is whether they mutually reinforce rather than fragment or compete.
2.5 Organizer Models
Marshall Ganz’s “organizing” perspective emphasizes narrative, relational ties, and distributed leadership over hierarchical control. He sees movements as building power, not just contesting it (see the interview “Are You Building Something?”). In AI: grassroots buy-in will require narrative coherence and relational organizing, not just top-down statements from experts. You may need “organizers” who can connect technical researchers with civic actors, not simply more memos.
3 Careers in AI Safety
Much to say; for now, see AI safety careers.
4 Anti-TESCREALism as a Case Study in Movement Fragmentation
Coverage of a recent attempt to open a new front in the forever culture war. You might want to skip this section unless you enjoy culture wars or are caught up in one of the touch-points of this one; otherwise, I would avoid feeding the attention/engagement monster.
IMO AI safety does not “need” to be partisan on its own merits, any more than COVID policy or climate change “needed” to be partisan. But we make things partisan these days to get engagement and social-media clicks, in this as in many other things.
Public debate over AI risk has recently been punctuated by the emergence of the acronym TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism), coined by Timnit Gebru and Émile P. Torres (Wikipedia). Gebru and Torres argue that these movements form an overlapping “bundle”, rooted in techno-utopian thought, which uses existential-risk narratives to justify speculative, high-stakes interventions.
Because few outside the inner circles of “rationalist / EA / futurist” discourse had encountered the term, the TESCREAL label functions partly as a sorting device: a quick tag for “those who endorse extreme futurism and technocratic risk arguments”, in opposition to more justice- or ethics-oriented critics.
4.1 What TESCREAL does
- Othering. TESCREAL often acts less as a precise critique and more as a boundary marker: it signals “this is not us” in debates over AI (much as woke, SJW, and cultural Marxist are used in broader culture-war settings).
- Simplification of pluralism. It flattens a diverse spectrum of positions (e.g. an EA working on malaria has little in common with a singularitarian) into one antagonistic “side.”
- Incentives for performative polarization. Once an issue is sorted along identity lines, winning the battle becomes more valuable than resolving the technical problem.
In other words: TESCREAL illustrates the mechanics of movement fragmentation and divide-and-rule in real time.
We have seen this kind of thing before: Consider anthropogenic climate change. What began as a scientific concern gradually became a cultural signifier. Political entrepreneurs and media ecosystems recognized the mobilizing power of the issue. Fossil fuel industries, seeking to avoid regulation, often funded skepticism, which was amplified in conservative media. Over time, belief in anthropogenic climate change became less correlated with scientific literacy and more correlated with political affiliation. The issue was reframed not as a collective environmental challenge, but as a zero-sum conflict between economic liberty and government overreach, solidifying the partisan divide.
The politicization of COVID-19 policy followed a very similar arc.
Under polarization, concerns about artificial superintelligence (ASI) and existential risk (X-risk) get assigned to the “TESCREAL” or “Techno‑Capitalist” side. Conversely, concerns about algorithmic bias, energy consumption, labour displacement, and corporate accountability are assigned to the “Social Justice” or “AI Ethics” side. If this division were to harden, it would reduce the ability of researchers to collaborate across domains, and of policymakers to address the full spectrum of AI risks coherently.
4.2 Connecting to movement theory
In social movement scholarship, fragmentation is a recurring hazard as movements scale, and cohesion a recurring challenge. Collective identity, shared framing, and ritualized boundary work are held to be essential to prevent implosion or co-optation (Peters 2018).
But “coherence” is fragile. As coalitions incorporate more actors, strategic and ideological tensions accumulate. That’s when labels, purism tests, or rhetorical bundles (like TESCREAL) tend to emerge. Many movements in history have died of internal splits long before achieving institutional change (e.g. the decline of the U.S. civil rights coalition post-1968, the splintering of 20th-century feminist waves, or schisms in radical environmentalism).
Within framing theory, a movement’s success depends on maintaining a narrative that is inclusive enough to hold alliances but sharp enough to mobilize. The introduction of TESCREAL is a framing move that cuts a movement into two camps and sets them against each other.
Finally, from a processual view (stages of social movements), fragmentation is often the fourth phase: incipiency → coalescence → institutionalization → fragmentation (Hiller 1975). The attempt to build a divide around anti-TESCREALism might be a symptom that the AI safety ecosystem is entering that danger zone of internal sorting and conflict.
4.3 Risks and lessons for AI safety
- Shrinking the coalition. If “TESCREAL vs. anti-TESCREAL” becomes the dominant axis, actors with cross-cutting interests (e.g. those who oppose over-acceleration but still care about X-risk) may be forced to pick sides, or be silenced by association.
- Flattening the policy space. Complex trade-offs (bias, labour, local harms, long-term alignment) get sorted into camps and then simplified into binary slogans.
- Overvaluing theatrics. The polarization dynamic rewards sharp call-outs, ridicule, exclusion, and other engagement-pumping behaviour over careful synthesis or cross-cutting engagement.
So what should one do, in a pluralist field where critique is necessary but collapse of the coalition is fatal?
- Prefer connective critique to sweeping rejections. Resist the temptation to brand entire clusters as “evil.”
- Emphasize shared purpose and modular trust. Build bridges over specific projects or domains, leaving space for disagreement elsewhere.
- Metacognitive boundary work. Be explicit about whether you are critiquing a policy position or stamping an identity tag.
- Guard narrative pluralism. Encourage multiple frames (immediate harm, existential risk, institutional resilience) to coexist rather than battle prematurely.
If TESCREAL is a sorting device more than an analytical lens, then the fight is not merely about correctness but about preserving the possibility of a unified movement. The worst outcome would be that our internal battles drown out the external crisis.
5 Some Emerging AI-Safety Civil Society Actors
Mostly recycled from the Social Change Lab map, but I plan to extend it.
5.1 Protest, disruption, mobilisation
Groups using direct action and mass visibility to pressure labs and governments.
- PauseAI: Decentralised network demanding a pause on frontier-model training until enforceable safety standards exist; noted for disciplined, media-friendly protests.
- Stop AI: Civil-resistance organisation calling for an outright ban on artificial general intelligence; embraces disruptive tactics to dramatise existential risk.
- People vs Big Tech: Youth-driven campaign linking AI harms to social-media dysfunction, surveillance, and concentration of tech power.
- Control AI: UK-based initiative urging public oversight of “frontier” AI; has gathered tens of thousands of signatories on open safety pledges.
5.2 Narrative-shaping and watchdogs
Actors exposing current harms and contesting industry framing.
- Algorithmic Justice League: Joy Buolamwini’s research-storytelling project documenting racial and gender bias in facial recognition.
- DAIR Institute: Timnit Gebru’s independent lab producing community-rooted empirical research on AI’s social impact.
- Fight for the Future: Veteran digital-rights NGO deploying rapid online mobilisation; expanding its scope from surveillance to generative-AI governance.
- AI Now Institute: Academic centre analysing corporate AI power and advocating for democratic accountability in tech governance.
5.3 Public literacy and democratic engagement
Groups focused on public understanding, participation, and deliberation.
- We and AI: Volunteer UK collective running workshops and partnerships to build inclusive AI literacy.
- Connected by Data: Policy nonprofit reframing data and AI governance as democratic—not purely technical—questions.
- CivAI: U.S. nonprofit using hands-on demos (deepfake creation, phishing simulations) to teach critical awareness of AI misuse.
- Democracy Next / Citizens’ Assemblies on AI: Facilitators of deliberative mini-publics producing policy recommendations on AI oversight.
- Global Citizens Assemblies: Prototype for transnational deliberation treating AI governance as a global commons issue.
5.4 Infrastructure, field-building, and lobbying
Institutions providing connective tissue, research capacity, and policy leverage.
- Ada Lovelace Institute: Independent think-tank convening cross-sector dialogue and generating policy research on equitable AI.
- AI Incident Database: Open repository of real-world AI failures enabling transparency and pattern analysis.
- Data & Society / Future of Life Institute / Humane Intelligence: Research and convening hubs connecting academic, activist, and governance communities around AI risk.
- Stop Killer Robots: Global coalition campaigning for binding international law against autonomous weapons; a model of issue-specific global coordination.
6 Incoming
- The “TESCREAL” Bungle
- Segmentation faults: how machine learning trains us to appear insane to one another.
- Semi-counterpoint: Doctorow on Big Tech narratives.
- Steven Buss, Politics for Software Engineers, Part 1, Part 3