Manufacturing partisan divisions in AI safety

Post-rationalism for non-post-rationalists

2024-09-30 — 2025-09-25

Wherein the term TESCREAL is catalogued as an exemplar of our need to draw battle lines around problems rather than solve them together, and that curious phenomenon bemoaned.

adversarial
AI safety
economics
faster pussycat
innovation
language
machine learning
mind
neural nets
NLP
technology
wonk
Attention conservation notice

Coverage of a recent battle in the forever culture war. You might want to skip reading about it unless you enjoy culture wars or are caught up in one of the touch-points of this one, such as AI risk or Effective Altruism.

My read of the situation is that this term is the early stage of a trajectory towards making something partisan that is not yet partisan, and that IMO does not “need” to be partisan on its own merits. But we make things partisan these days.

Figure 1: Go on, buy the sticker

Public discourse in the West appears to be governed by an accelerating social mechanism—a Polarization Machine—that systematically transforms complex, nuanced societal challenges into binary, partisan conflicts. Issues that initially demand broad consensus and expert navigation invariably become litmus tests for in-group affiliation.

We all know that much — though it’s hard to see from inside the machine; I always notice when I get stuck in it myself. The process of making something happen in a modern democracy seems to be largely about coalition dynamics: we have to figure out which party will win, and how to get them to support our cause.

Here I want to track this process in AI risk policy, exemplified by the debate over “TESCREAL.”

1 Mechanics of the Polarization Machine

Consider anthropogenic climate change. What began as a scientific concern gradually became a cultural signifier. Political entrepreneurs and media ecosystems recognized the mobilizing power of the issue. Fossil fuel industries, seeking to avoid regulation, often funded skepticism, which was amplified in conservative media. Over time, belief in anthropogenic climate change became less correlated with scientific literacy and more correlated with political affiliation. The issue was reframed not as a collective environmental challenge, but as a zero-sum conflict between economic liberty and government overreach, solidifying the partisan divide.

The COVID-19 pandemic offers a compressed example. Initial responses to lockdowns saw relative unity. Interventions like masking and vaccination, however, rapidly became polarized.

In a fragmented media landscape, algorithms often prioritize engagement, which favours outrage and confirmation bias. Public health measures were reframed as assaults on personal freedom by one side, and as moral imperatives by the other. Identity quickly overtook epidemiology as the primary determinant of behaviour, with devastating consequences for coherent public health policy, trust in institutions, and social cohesion.
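To make that claim concrete, here is a minimal toy simulation, entirely my own sketch rather than anything from the literature cited here. Each agent forms a binary belief either from a noisy private signal about the truth or from a (hypothetical) elite cue issued by their party; as the identity weight w rises, belief decouples from the evidence and couples to party affiliation. All parameters are illustrative.

    # Toy model of identity overtaking evidence (illustrative sketch only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, truth = 10_000, 1
    party = rng.integers(0, 2, n)                 # two arbitrary coalitions
    signal = np.where(rng.random(n) < 0.7, truth, 1 - truth)  # 70%-accurate private evidence
    cue = party                                   # hypothetical elite positions: party 1 says "yes", party 0 says "no"

    for w in (0.0, 0.5, 0.9):                     # weight placed on identity over evidence
        belief = np.where(rng.random(n) < w, cue, signal)
        print(f"w={w:.1f}  corr(belief, party)={np.corrcoef(belief, party)[0, 1]:+.2f}"
              f"  corr(belief, signal)={np.corrcoef(belief, signal)[0, 1]:+.2f}")

At w = 0, belief tracks the evidence perfectly and is uncorrelated with party; by w = 0.9, party label predicts belief far better than the evidence does, which is the sorting pattern described above.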

The underlying causes of this increasing polarization are multifaceted and the subject of extensive research. I won’t go into them here. Let’s take it as given that “policy debates that survive the media ecosystem” are likely to be polarized. All else being equal, if I’m debating a policy, I’m likely to find myself in a polarized debate, in which my in-group believes something I regard as manifestly sensible and faces an out-group that I regard as manifestly wrong. Typically the out-group is othered in the discourse.

2 How it breaks in AI safety

The debate surrounding AI risks has been simmering for years, but recent events have brought it to a boil, creating the conditions for partisan sorting.

A significant flashpoint was the contentious departure of Dr. Timnit Gebru from Google in late 2020. Gebru, a prominent AI ethics researcher, co-authored the famous “Stochastic Parrots” paper (Bender et al. 2021) highlighting the risks of Large Language Models, including environmental costs and baked-in biases. The controversial and suspicious circumstances of her firing galvanized a significant segment of the AI research community, highlighting a perceived conflict between corporate interests prioritizing rapid development and researchers focused on immediate societal harms and accountability.

In contrast, figures associated with Silicon Valley venture capital, such as Marc Andreessen, Peter Thiel and Elon Musk, began advocating aggressively for “accelerationism”—the idea that technological progress should be pushed forward as quickly as possible, often explicitly dismissing ethical concerns as impediments to innovation.

It was into this fraught environment that the term TESCREAL was introduced by Dr. Émile P. Torres and later popularized in Gebru and Torres (2024). The letters stand for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism.

It is demonstrably true that the movements grouped under the acronym are diverse. A garden-variety Effective Altruist focused on malaria prevention in Africa likely shares little ideological ground with a Singularitarian anticipating the merger of human and machine.

TESCREAL seems to function mostly as an othering term. In that regard, it’s like woke, commie, papist, antifa, protestant, cultural Marxist, SJW, colonial, native, Axis of Evil, Asian, African…. The function it shares with each of these is that it categorizes a group of people who do not see each other as fellow travellers, but whom the speaker wishes to depict to their own base as outsiders. Such terms need not be particularly well-defined. The main function is to other: to create a category of people who are not like us, who are not to be trusted, who are an out-group.

To me, this category feels gerrymandered. Maybe all otherings feel like that, and this one is only salient to me because I am unaccustomed to being othered, as a guy in the dominant gender and ethnic (etc) category in his local neighbourhood. It is very likely that people in minority categories who are more accustomed to being othered will tell me that this is par for the course.

Nonetheless, it feels preposterous to be on the receiving end of a term that lumps me in with people I find objectionable. That observation leads me to wonder how much mileage I myself could get out of lumping together all the movements that have rubbed me the wrong way into a single acronym. (“Let me tell you about NIMBYs, coal magnates, liberal scolds, and three-chord punk bands, and how they are all part of the same bundle of malign patsy ideologies, which I call NICOLS3Cpism.”)

At the same time, inside the nebulous and spiky border that TESCREAL draws, there is a lot of moral complexity. It would be naive to assume a collection of communities and movements, some large and influential, are free of problematic elements. There are undoubtedly individuals within these circles who harbour eugenicist views, adhere to extreme forms of utilitarianism that discount present suffering, or are primarily motivated by profit accumulation.1 Critics wielding the TESCREAL label are often reacting to real statements, historical connections, or funding sources that alarm them for good reasons. The polarization dynamic occurs not necessarily because the critique is wholly unfounded, but when these specific, alarming elements are generalized to define the entire “bundle”, thereby establishing the battle lines.

If AI safety becomes a culture war issue, the incentive structure changes. The goal becomes winning the political battle rather than solving the technical and societal problems. This dynamic rewards performative antagonism over careful deliberation. We already see this playing out: the naming of TESCREALism has given reactionary accelerationists a new word to troll their opponents with, by claiming the movement that has been named into existence. As Torres points out, for a while there, arch-accelerationist Marc Andreessen adopted it as a self-description in his Twitter bio. Someone took the bait and attempted to brand anti-TESCREALism as R9PRESENTATIONALism in return.

Under polarization, concerns about artificial superintelligence (ASI), existential risk (X-risk), and arguments for acceleration are being assigned to the “TESCREAL” or “Techno‑Capitalist” side. Conversely, concerns about algorithmic bias, energy consumption, labour displacement, and corporate accountability are assigned to the “Social Justice” or “AI Ethics” side.

This sorting process simplifies the landscape but distorts both reality and the policy debate. It forces individuals to choose a “package deal”. If one identifies with the AI Ethics camp, there may be social pressure to downplay X-risk, since it is perceived as the priority of the “other side”. If one is aligned with the accelerationists, there is pressure to dismiss concerns about bias as distractions from progress.

3 Hypotheses and Consequences for AI Safety

If we do face polarization of AI risks, which risks will be assigned to which coalitions? AI risk is not monolithic; it encompasses everything from the societal disruption of deepfakes and autonomous weapons to the potential catastrophe of misaligned superintelligence.

These risks are not mutually exclusive. It is valuable both to address racial biases in current LLMs and to prepare for the possibility that future systems escape human control. The polarization dynamic, however, encourages us to shrink the coalition advocating for AI safety by framing the debate as a zero-sum conflict between “longtermist” concerns and “immediate harms”, and discourages us from exploring the cases where goals overlap.

How might this play out?

  1. Policy Paralysis: Policy proposals regarding AI will become increasingly polarized along existing political lines, making comprehensive regulation difficult to achieve, even where common ground exists.
  2. Extremism: The most extreme voices on both sides (e.g., “doomers” predicting imminent apocalypse and “accelerationists” dismissing all regulation) will gain prominence as they are more effective at mobilizing their respective bases.

4 Pragmatics

When confronted with arguments like those presented by Torres, the instinct among those labelled is often to debunk the “gotchas”—to point out the lack of collusion, the diversity of opinion within the Effective Altruism movement, or the flawed interpretations of Longtermism.

You might wish to read Ozy Brennan’s The “TESCREAL” Bungle which takes a debunking approach and does so very precisely.

While accuracy matters, debunking risks missing the forest for the trees: it accepts the premise of the conflict without analyzing the mechanism that creates it.

I’d be more interested in a strategy that understands the dynamics of polarization itself, and resists the sorting machine directly by understanding how we’re complicit in it.

We live in a populist age where uniting against a common enemy is politically practical, and the existence or coherence of that enemy is secondary.

There are many valid demands from anti-TESCREALists—about corporate accountability, equity, and opposition to unbridled tech-accelerationism.

Yet, if advancing these causes requires manufacturing a partisan divide over the nature of AI risk, we may ultimately undermine the broader goal of ensuring a safe future with AI.

I’m not sure where to go from here.

Intuitively, it seems to me that people concerned about AI worry about broadly the same things, and we haven’t yet reached a point where we need to trade those concerns off against each other or delay action while we have factional fights—we probably largely agree about both our goals and the means to achieve them.

If the Polarization Machine succeeds in sorting AI safety, the result will not be the victory of one camp over the other, but a fractured response to one of humanity’s most significant challenges.

I prefer to prioritise resisting this sorting. That means looking for converts rather than heretics, and maintaining the broadest possible coalition to navigate the future of AI.

I could be on the wrong side of history, however. Maybe fighting about what to do is worth it if it brings about action and change. Maybe setting up battle lines and making us-and-them teams is worth the cost of polarization? In a populist age in which uniting against a common enemy matters more than the enemy existing, maybe the only way to get things done is to manufacture a partisan divide.

I don’t like the idea of living in a world where we all play divide-and-rule against each other in order to keep good policy on the agenda, but maybe there is no better way to play the game?

In that case, I console myself that I may soon be able to rally my troops to crush the NICOLS3Cpist enemy within.

5 References

Bender, Gebru, McMillan-Major, et al. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
Devenot. 2023. “TESCREAL Hallucinations: Psychedelic and AI Hype as Inequality Engines.”
Gebru, and Torres. 2024. “The TESCREAL Bundle: Eugenics and the Promise of Utopia Through Artificial General Intelligence.” First Monday.

Footnotes

  1. Indeed, the current global resurgence of natalist nationalism such as in Hungary suggests eugenicist impulses are hardly confined to techno-utopian circles.↩︎