Spamularity and dark forest internet

The infosphere after people

October 5, 2015 — November 20, 2024

AI
confidentiality
adversarial
catastrophe
computers are awful together
democracy
economics
evolution
game theory
incentive mechanisms
networks
P2P
social graph
virality
wonk
Figure 1: Hmm let me see; devastate, develop or devour?

Fake news, credibility, verification of provenance, etc. TBC. On the arms race in pretending to be alive.

Candidate for merging with the misinformation notebook?

1 Dark forest

Maggie Appleton’s commentary on the Dark Forest theory of the internet, originally proposed by Yancey Strickler:

The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk.

It’s like a dark forest that seems eerily devoid of human life—all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators.

Humans who want to engage in informal, unoptimised, personal interactions have to hide in closed spaces like invite-only Slack channels, Discord groups, email newsletters, small-scale blogs, and digital gardens. Or make themselves illegible and algorithmically incoherent in public venues.

I feel like I’m going to lose this battle, but for the record, I do not love the term “textpocalypse”.

2 Spamularity

Charlie Stross’s 2010 Spamularity stuck with me:

We are currently in the early days of an arms race, between the spammers and the authors of spam filters. The spammers are writing software to generate personalized, individualized wrappers for their advertising payloads that masquerade as legitimate communications. The spam cops are writing filters that automate the process of distinguishing a genuinely interesting human communication from the random effusions of a ’bot. And with each iteration, the spam gets more subtly targeted, and the spam filters get better at distinguishing human beings from software, in a bizarre parody of the imitation game popularized by Alan Turing (in which a human being tries to distinguish between another human being and a piece of conversational software via textual communication) — an early ad hoc attempt to invent a pragmatic test for artificial intelligence.

We have one faction that is attempting to write software that can generate messages that can pass a Turing test, and another faction that is attempting to write software that can administer an ad-hoc Turing test. Each faction has a strong incentive to beat the other. This is the classic pattern of an evolutionary predator/prey arms race: and so I deduce that if symbol-handling, linguistic artificial intelligence is possible at all, we are on course for a very odd destination indeed—the Spamularity, in which those curious lumps of communicating meat give rise to a meta-sphere of discourse dominated by parasitic viral payloads pretending to be meat…
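The filter side of the arms race Stross describes is, at its simplest, a text classifier. A minimal sketch, assuming an invented toy corpus (none of this is from Stross): a naive Bayes spam filter that scores a message under spam and ham word models and picks the likelier one.

```python
from collections import Counter
import math

# Toy illustration of the filter faction in the arms race: a naive Bayes
# classifier that labels a message spam or ham from word frequencies.
# The training corpus below is invented for the example.
spam_docs = ["cheap pills buy now", "buy cheap watches now", "win money now"]
ham_docs = ["lunch at noon tomorrow", "draft of the paper attached", "see you at noon"]

def train(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = train(spam_docs), train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the score
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    return log_likelihood(msg, spam_counts) > log_likelihood(msg, ham_counts)

print(is_spam("buy cheap pills"))          # True
print(is_spam("lunch tomorrow at noon"))   # False
```

The generator faction's move in the same game is to make spam's word statistics resemble the ham corpus, which is exactly why each side's improvement pressures the other.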

In The Economics of Spam, Bruce Schneier argues, drawing on Kanich et al. (2008), that spam is probably optimized for traffic rather than conversion, i.e. quantity over quality, at least at that time. I suspect that was true of email spam then, but not of the Dark Forest internet of today.
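The quantity-over-quality claim follows from a back-of-envelope calculation: when sending is nearly free and conversion is minuscule, profit is driven almost entirely by volume. All numbers below are hypothetical illustrations, not figures from Kanich et al. (2008).

```python
# Hypothetical spam campaign economics. Every number here is invented
# to illustrate the structure of the argument, not measured data.
emails_sent = 100_000_000     # hypothetical campaign size
conversion_rate = 1e-7        # hypothetical: one sale per ten million emails
revenue_per_sale = 100.0      # hypothetical
cost_per_email = 1e-6         # hypothetical: botnet delivery is nearly free

# Profit per email is tiny but positive, so profit scales with volume.
profit = emails_sent * (conversion_rate * revenue_per_sale - cost_per_email)
print(profit)  # 900.0
```

Under these assumptions each message is worth fractions of a cent, so the rational spammer maximises send volume rather than per-message persuasiveness, which is the "traffic, not conversion" point.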

Sam Kriss calls the spamularity the language of god:

What is machine language? Firstly, machine language is vampiric, shamanic, xenophagic, mocking. It’s a changeling. Often it tries to imitate human discourse; the machine wants you to think that it’s human. This is the first level of deception. Often this isn’t enough: machines will use various methods to take over other text-producing systems, so that without your knowledge you end up advertising weight loss pills to all your old school friends. First axiom: all language has the potential to become machine language. To become infected. 10 Award-Winning GIFs That Will Leave You Wanting More. I Could Watch #4 For Days. This is the second level of deception. In the third level of deception, the machine convinces itself that it has a physically extended body, that it has an independent mind, that it really wants to produce the text it generates. This might happen very soon. It might have already happened, somewhere on a dusty plain in western Africa, somewhere that never really existed, tens of thousands of years ago.

3 Incoming

4 References

Jaidka, Chen, Chesterman, et al. 2024. “Misinformation, Disinformation, and Generative AI: Implications for Perception and Policy.” Digit. Gov.: Res. Pract.
Kanich, Kreibich, Levchenko, et al. 2008. “Spamalytics: An Empirical Analysis of Spam Marketing Conversion.” In Proceedings of the 15th ACM Conference on Computer and Communications Security. CCS ’08.
Messeri, and Crockett. 2024. “Artificial Intelligence and Illusions of Understanding in Scientific Research.” Nature.
Rao, and Reiley. 2012. “The Economics of Spam.” Journal of Economic Perspectives.
Shumailov, Shumaylov, Zhao, et al. 2023. “The Curse of Recursion: Training on Generated Data Makes Models Forget.”