Incorporating technological singularities, hard AI take-offs, game-over high scores, the technium, deus-ex-machina, deus-ex-nube, AI supremacy, nerd raptures and so forth

December 2, 2016 — May 7, 2024

faster pussycat
machine learning
neural nets
Figure 1

Small notes on the Rapture of the Nerds. If AI keeps on improving, will explosive intelligence eventually cut humans out of the loop and go on without us? Also, crucially, would we be pensioned in that case?

The internet has opinions about this.

A fruitful application of these ideas is in producing interesting science fiction and contemporary horror.

1 x-risk

Incorporating other flavours of existential badness.

It is a shibboleth of the rationalist community to express the opinion that the risks of a possible AI explosion are under-managed compared to the risks of more literal explosions. Also, to wonder whether an AI singularity has already happened and we are merely being simulated by it.

There is a possibility that managing, e.g., the climate crisis is on the critical path to AI takeoff, and we are not managing that risk well; in particular, I think we are not managing its tail risks at all well.

I would like to write some wicked tail risk theory at some point.

2 In historical context

Figure 2

More filed under big history.

2.1 Most-important century model

3 Models of AGI

Figure 3: I cannot even remember where I got this

4 Technium stuff

More to say here; perhaps later.

5 Aligning AI

Let us consider general alignment, because I have little AI-specific to say.

6 Constraints

6.1 Compute methods

We are getting very good at using hardware efficiently (Grace 2013). The post “AI and Efficiency” (Hernandez and Brown 2020) makes this clear:

We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
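The arithmetic behind that comparison is easy to check. A minimal sketch, assuming a 16-month doubling time for algorithmic efficiency and a conventional ~24-month doubling for Moore's law (the function and numbers here are my own back-of-envelope reconstruction, not taken from the paper's data):

```python
def efficiency_gain(months: float, doubling_months: float = 16.0) -> float:
    """Multiplicative reduction in compute after `months` of repeated halving."""
    return 2.0 ** (months / doubling_months)

# 2012 -> 2019 is roughly 84 months.
algorithmic = efficiency_gain(84)        # ~38x from algorithmic progress alone
moores_law = efficiency_gain(84, 24.0)   # ~11x from hardware, matching the quote

print(f"algorithmic: {algorithmic:.0f}x, Moore's law: {moores_law:.0f}x")
```

A pure 16-month doubling gives roughly 38x over seven years, a little shy of the measured 44x, which is consistent with the paper's point that algorithmic gains dominate the hardware trend.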

See also

6.2 Compute hardware


7 Incoming

Figure 4: Tom Gauld

8 References

Acemoglu, Autor, Hazell, et al. 2020. “AI and Jobs: Evidence from Online Vacancies.” Working Paper 28257.
Acemoglu, and Restrepo. 2018. “Artificial Intelligence, Automation and Work.” Working Paper 24196.
———. 2020. “The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand.” Cambridge Journal of Regions, Economy and Society.
Birhane, and Sumpter. 2022. “The Games We Play: Critical Complexity Improves Machine Learning.”
Bostrom. 2014. Superintelligence: Paths, Dangers, Strategies.
Bubeck, Chandrasekaran, Eldan, et al. 2023. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.”
Chalmers. 2016. “The Singularity.” In Science Fiction and Philosophy.
Chollet. 2019. “On the Measure of Intelligence.” arXiv:1911.01547 [cs].
Collison, and Nielsen. 2018. “Science Is Getting Less Bang for Its Buck.” The Atlantic.
Donoho. 2023. “Data Science at the Singularity.”
Everitt, and Hutter. 2018. “Universal Artificial Intelligence: Practical Agents and Fundamental Challenges.” In Foundations of Trusted Autonomy.
Grace. 2013. “Algorithmic Progress in Six Domains.”
Grace, Salvatier, Dafoe, et al. 2018. “Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts.” Journal of Artificial Intelligence Research.
Harari. 2018. Homo Deus: A Brief History of Tomorrow.
Hernandez, and Brown. 2020. “Measuring the Algorithmic Efficiency of Neural Networks.”
Hildebrandt. 2020. “Smart Technologies.” Internet Policy Review.
Hutson. 2022. “Taught to the Test.” Science.
Hutter. 2000. “A Theory of Universal Artificial Intelligence Based on Algorithmic Complexity.”
Johansen, and Sornette. 2001. “Finite-Time Singularity in the Dynamics of the World Population, Economic and Financial Indices.” Physica A: Statistical Mechanics and Its Applications.
Lee. 2020a. “14 Coevolution.” In The Coevolution: The Entwined Futures of Humans and Machines.
———. 2020b. The Coevolution: The Entwined Futures of Humans and Machines.
Manheim, and Garrabrant. 2019. “Categorizing Variants of Goodhart’s Law.”
Mitchell. 2021. “Why AI Is Harder Than We Think.” arXiv:2104.12871 [cs].
Nathan, and Hyams. 2021. “Global Policymakers and Catastrophic Risk.” Policy Sciences.
Omohundro. 2008. “The Basic AI Drives.” In Artificial General Intelligence 2008: Proceedings of the First AGI Conference.
Philippon. 2022. “Additive Growth.” Working Paper Series.
Russell. 2019. Human Compatible: Artificial Intelligence and the Problem of Control.
Sastry, Heim, Belfield, et al. n.d. “Computing Power and the Governance of Artificial Intelligence.”
Scott. 2022. “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale.” American Academy of Arts & Sciences.
Silver, Singh, Precup, et al. 2021. “Reward Is Enough.” Artificial Intelligence.
Sunehag, and Hutter. 2013. “Principles of Solomonoff Induction and AIXI.” In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 – December 2, 2011. Lecture Notes in Computer Science.
Wong, and Bartlett. 2022. “Asymptotic Burnout and Homeostatic Awakening: A Possible Solution to the Fermi Paradox?” Journal of The Royal Society Interface.