Anti-TESCREALists

Post-rationalism for non-post-rationalists

September 30, 2024 — March 1, 2025

Tags: adversarial, economics, faster pussycat, innovation, language, machine learning, mind, neural nets, NLP, security, technology
Figure 1: Go on, buy the sticker
Attention conservation notice

Coverage of a recent battle in the forever culture war. You might want to skip reading about it unless you enjoy culture wars or are caught up in one of the touch-points of this one, such as AI risk or Effective Altruism. Or you might read Ozy Brennan’s more tightly-argued The “TESCREAL” Bungle.

TESCREAL is a term, like woke, commie, papist, antifa, protestant, cultural Marxist, SJW, colonial, native, Axis of Evil, Asian, African…. The common feature of all of these terms is that they categorise a group of people who do not see each other as fellow travelers, but whom the speaker wishes to depict to their own base as outsiders. Such terms usually start out pejorative, but may be reclaimed by the groups they are meant to other. The term need not be particularly well-defined. Often it includes some core constituency, but lumps a lot of other people in with them. It need not be considered useful by the people in the groups it claims to refer to. The main function is to other: to create a category of people who are not like us, who are not to be trusted, who are an out-group. That othering seems to me to be the main function of TESCREAL.

TESCREAL is an acronym for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism. It was introduced in an article by Émile P. Torres and given a different spin in Gebru and Torres (2024). I’m not a fan of the Torres article, which is so messy that I do not think there is much in it to refute. Read Gebru and Torres (2024) if you want a much crisper version of the case. I wrote a lot of the content here before the Gebru article came out; that article is a lot better, so apologies for the failure to engage with the best version of the argument. I will try to do so if I get time.

Either way, I’m not myself sold on the usefulness of TESCREAList, any more than I am sold on SJW or papist. The essential thesis is that there is an underlying connection between these philosophies which is not immediately obvious. If I’ve understood it correctly, it includes some combination of:

  1. eugenics
  2. longtermist utilitarianism
  3. being a catspaw for large tech companies to escape blame for inequitable social outcomes
  4. too much focus on the “wrong” parts of AI Safety at the expense of the “right” parts (I think this means overweighting catastrophic risk relative to risks of economic disruption?)
  5. …and maybe the TESCREALists are in some sense colluding?

I would be astonished if there were not eugenicists, weird utilitarians, tech company shills, and people indifferent to ordinary people’s anxieties about AI in the movements named, because they are diverse movements with large memberships, and people believe stuff both more and less weird than that. But none of this is standard fare for garden-variety effective altruists, many of whom have qualms about longtermism.

I’m not sure about the last point, the collusion; that is, I’m not sure whether it is supposed to be implied or not. Torres is at pains to claim the TESCREALists are not colluding, because they are a disconnected ‘bundle’. But then they (Torres) spend a lot of time talking about all the coordinating the tech elite are doing; one paragraph connects the dots between various bogeymen, Elon Musk, Nick Bostrom, Will MacAskill, Sam Altman, Jaan Tallinn, Peter Thiel, and Vitalik Buterin, and subsequent authors have claimed TESCREALism is an actual cult. It feels like they would like to have it both ways.

I don’t feel the collusion case is strong. I mean, I bet most of those named people know each other, but what cult would have them all in, and who would clean up after the fighting? And would my local effective altruism meetup welcome all those folks to the vegan morning tea meeting on the topic of reducing child lead poisoning or air pollution?


To me this othering feels gerrymandered. Maybe all otherings feel like that, and this one is only salient to me because the communities being othered are ones I know well. This has not stopped the term from attaining some traction.

In the movements they name that I know well (which is not all of them — where are the Cosmists and Extropians and Transhumanists these days?), there is no single view on AI X-risk, longtermism, or the future in general. Nor do they share, say, a consistent utilitarian stance, or even a consistent interpretation of utilitarianism when they are utilitarian. We can draw a murder-board upon which key individuals in each of the philosophies are connected by red string, but TESCREAL does not seem to be a natural category in any strong sense, any more than Axis of Evil or NICOLS3CP (of which more below). The movements are connected to each other by no single factor except Torres and Gebru’s opposition to them.

That observation leads me to wonder how much mileage I myself could get out of lumping all the movements that have rubbed me the wrong way in the past together into a single acronym. (“Let me tell you about NIMBYs, coal magnates, liberal scolds, and three-chord punk bands, and how they are all part of the same bundle of malign patsy ideologies, which I call NICOLS3CPism.”)

On the other hand, the fact that the associations do not seem meaningful to me does not mean there is nothing to criticise in the various movements. Annoyingly, the Torres article does not clearly identify which arguments in particular the author thinks are deficient in the ‘bundle’. Like Torres, I would take issue with reasoning like this if I saw it in the wild:

the biggest tragedy of an AGI apocalypse wouldn’t be the 8 billion deaths of people now living. This would be bad, for sure, but much worse would be the nonbirth of trillions and trillions of future people who would have otherwise existed.

Two notes here. Firstly, [citation needed]. Secondly, what even is the critique here? If the action to take to prevent a disaster is the same either way, do we care? If you save my life, I will not ask you to show your working to make sure you did it for the correct philosophical reason. Is taking AI risk seriously the bad bit? Or is it the details of the moral justification that we should be worried about? The question seems moot in any case; AFAICT most of the people covered by the TESCREAL acronym are not committed longtermists, and would probably give Torres a more palatable reason for wishing to save their (Torres’) life than they (Torres) seem to expect, one that probably does not involve anyone in remote galaxies in distant futures.

Don’t get me wrong, it can be important to note what uses movements make of philosophies. Furthermore, movements are hijacked by bad actors all the time (which is to say, actors whose ends may have little to do with the stated philosophies of the movement), and it is important to be aware of that too, for any given movement.

I’m not sure that is happening here. For now, the major outcome of naming TESCREALism seems to be giving reactionary accelerationists a new word to troll their opponents with. As Torres points out, for a while there, arch-accelerationist Marc Andreessen’s Twitter bio trolled his opponents by appropriating TESCREAL, amongst other terms:

Technology brother; AI accelerationist; GPU supremacist; shoggoth disciple; cyberpunk activist; embracer of variance; TESCREAList. Let it rip!

FWIW, I think there are dangerous philosophies in the various movements name-checked in the TESCREAL acronym. Some flavours of longtermism (e.g. the ones that apparently allow an unlimited budget of suffering now against badly quantified futures) seem to me pretty undercooked. Blanket accelerationism is IMO dangerous. Those are both relatively niche positions, however, and I don’t think they are the main thrust of the movements in question.

That said, if the goal is to have a banner to rally the troops under, complaining about TESCREALists is one: a Schelling point for creating an “anti-TESCREAL movement”. That might actually be a coherent thing, or at least a more coherent thing than the thing it is criticising.

I’m not into that game of building movements by creating enemies, myself. It has been said that “the right looks for converts, the left looks for heretics”. Insofar as TESCREAL does its best to make heretics rather than converts, it is not my game.

I could be on the wrong side of history, however. We live in a populist age, in which uniting against a common enemy might matter more than the enemy actually existing. The anti-TESCREALists make many implied demands, about corporate accountability, about equity for workers in the AI industry, and against unbridled tech-accelerationism, that I also make myself. I am not sure that any of those depend upon the TESCREAL bundle “really” existing, or on shadowy cabals of charities coordinating, or on the existence of an underlying coherent opposed philosophy. But maybe all those causes can be advanced if we can bring out the base by mobilising them against something. Manufacturing conspiracies against my own side may be how I should push my agenda, as perhaps should we all. Maybe that is for the greater good, and the truth does not prevent suffering as effectively as the right people winning does. If so, by the remorseless calculus of utilitarianism, we must create new enemies so that we may win the day.

1 References

Gebru, Timnit, and Émile P. Torres. 2024. “The TESCREAL Bundle: Eugenics and the Promise of Utopia Through Artificial General Intelligence.” First Monday 29 (4).