Technological singularities

Incorporating hard AI take-offs, game-over high scores, the technium, god-in-the-cloud, deus ex machina, nerd raptures and so forth

Small notes on the Rapture of the Nerds. If AI keeps improving, will explosive intelligence eventually cut humans out of the loop and go on without us? And if so, will we be pensioned off?

The internet has opinions about this.

A fruitful application of these ideas is in producing interesting science fiction and contemporary horror.

It is a shibboleth of the Rationalist community to express the opinion that the risks of a possible AI explosion are under-managed compared to the risks of more literal explosions. Another is to wonder whether an AI singularity has already happened and we are merely being simulated by it.

I contend that managing, e.g., the climate crisis is on the critical path to even reaching a hard AI takeoff, and we are not managing that risk well enough to get to the more exciting hard-AI risks; so the question of which one we are failing to manage worse seems to me not so interesting.