Superintelligence

Incorporating technological singularities, hard AI take-offs, game-over high scores, the technium, deus-ex-machina, deus-ex-nube, AI supremacy, nerd raptures and so forth

December 2, 2016 — October 18, 2024

Tags: adversarial, catastrophe, economics, faster pussycat, innovation, language, machine learning, mind, neural nets, NLP, security, technology
Figure 1: Go on, buy the sticker

Small notes on the Rapture of the Nerds. If AI keeps on improving, will explosive intelligence eventually cut humans out of the loop and go on without us? Also, crucially, would we be pensioned in that case?

The internet has opinions about this.

A fruitful application of these ideas is in producing interesting science fiction and contemporary horror. I would like there to be other fruitful applications, but so far they are all much more speculative.


1 Safety, risks

See AI Safety.

2 WTF is TESCREALism?

An article denouncing TESCREALism has gone viral in my circles recently. There is a lot going on there, but the main criticism seems to be that some flavours of longtermism lead to unpalatable conclusions, including excessive worry about AI x-risk at the expense of the currently living. It goes on to frame several online communities that have entertained various longtermist ideas as a “bundle”, which I assume is intended to imply that these groups form a political bloc that encourages or enables accelerationist hypercapitalism.

I am not a fan of TESCREALism as a term.

For one thing, the article leans on genealogical arguments, mostly guilt by association. Many of the movements it names have no single view on AI x-risk, longtermism, or even the future in general, nor a consistent utilitarian stance, nor a consistent interpretation of utilitarianism when they are utilitarian. We could draw a murder-board upon which key individuals in each of them are connected by red string, but it doesn’t seem to be a strongly natural category. Anyway, just because the associations do not seem meaningful to me doesn’t mean the arguments these movements share can’t be bad.

However, the article is not good at identifying which arguments in particular the author thinks are deficient in the bundle. I think it is some longtermist themes? If I disagree with some version of e.g. longtermism, why not just say I disagree with that? Better yet, why not mention which of the many longtermisms I am worried about?

The effect, to me, is that the main critique boils down to this: a large assortment of people, who disagree with the author in different ways and with each other in different ways, are outgroup, united in a vaguely-specified nefarious alignment with dark forces. The muddier strategy of the article, disagreeing-with-longtermism-plus-feeling-bad-vibes-about-various-other-movements-and-philosophies-that-have-a-diverse-range-of-sometimes-tenuous-relationships-with-longtermism, doesn’t feel like it makes TESCREALism do useful work as a unit of analysis.

I saw this kind of guilt-by-association play out in public discourse previously with “neoliberalism”, and the criticisms of the “woke” “movement” are probably doing the same thing. Since reading this article, I have worried that I make the same mistake myself when talking about neoreactionaries. As such, I am grateful to the authors for making me interrogate my own prejudices, although I suspect that, if anything, they have shifted me in the opposite direction to the one they intended.

Don’t get me wrong, it is important to see what uses are made of philosophies by movements. Further, movements are hijacked by bad actors all the time (which is to say, actors whose ends may have little to do with the stated goals of the movement), and it is important to be aware of that. Analysis of those important dynamics is typically best done by reducing them to their component parts, not gerrymandering them together.

If “TESCREALists” are functioning as a bloc, then… by all means, analyse this. I think that signatories to some components of the acronym do indeed function as a bloc from time to time (cf. rationalists and effective altruists).

Broadly, I am not convinced there is a movement to hijack in this acronym, just some occasional correlations. Cosmism and effective altruism are not in correspondence with each other, not least because all the Cosmists are dead.

I made a meal of that, didn’t I? Many of my colleagues have been greatly taken with dismissing things as TESCREALism of late, so I think it needs mentioning.

I’m vaguely baffled by the whole thing though, and wondering how much mileage I can get out of lumping together all the movements that have rubbed me the wrong way in the past into a single acronym. (“Let me tell you about NIMBYs, coal magnates, liberal scolds, and three-chord punk, and how they are all part of the same movement, which I will call NICOLSP.”)

3 In historical context


More filed under big history.

3.1 Most-important century model

4 Models of AGI

Figure 4: I cannot even remember where I got this

5 Technium stuff

More to say here; perhaps later.

6 Aligning AI

Let us consider alignment in general, because I have little that is AI-specific to say yet.

7 Constraints

7.1 Compute methods

We are getting very good at efficiently using hardware (Grace 2013). AI and efficiency (Hernandez and Brown 2020) makes this clear:

We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
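As a sanity check, the quoted figures hang together arithmetically. A minimal sketch, in which the ~87-month window (mid-2012 AlexNet to the analysis) and the 24-month Moore’s-law doubling time are my assumptions, not numbers from the paper:

```python
# Back-of-envelope check of the Hernandez and Brown figures.
# Assumptions (mine, not theirs): ~87 months between AlexNet (mid-2012)
# and the analysis, and Moore's law read as a doubling every 24 months.
months = 87

algorithmic_gain = 2 ** (months / 16)  # compute halves every 16 months
moores_law_gain = 2 ** (months / 24)   # cost halves every 24 months

print(f"algorithmic gain: ~{algorithmic_gain:.0f}x  (they report 44x)")
print(f"Moore's-law gain: ~{moores_law_gain:.0f}x  (they report 11x)")
```

Running this gives roughly 43x and 12x, which matches their 44x and 11x to within rounding of the window length.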

See also

7.2 Compute hardware

TBD

8 Omega point etc

Surely someone has noticed the poetical similarities to the idea of noösphere/Omega point. I will link to that when I discover something well-written enough.

Q: Did anyone think that the noösphere would fit on a consumer hard drive?

“Hi there, my everyday carry is the sum of human knowledge.”
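For what it’s worth, the arithmetic of the quip roughly works out. A minimal sketch, where every number is my own order-of-magnitude guess rather than anything authoritative:

```python
# Rough sizes in GB; all three figures are order-of-magnitude guesses.
wikipedia = 25       # compressed English Wikipedia text dump
llm_weights = 140    # a 70B-parameter model at 16 bits per weight
consumer_ssd = 4000  # an unremarkable 4 TB consumer drive

corpus = wikipedia + llm_weights
print(f"corpus: {corpus} GB; fits on the drive {consumer_ssd // corpus} times over")
```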

9 Incoming

Figure 5: Tom Gauld

10 References

Acemoglu, Autor, Hazell, et al. 2020. “AI and Jobs: Evidence from Online Vacancies.” Working Paper 28257.
Acemoglu, and Restrepo. 2018. “Artificial Intelligence, Automation and Work.” Working Paper 24196.
———. 2020. “The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand.” Cambridge Journal of Regions, Economy and Society.
Birhane, and Sumpter. 2022. “The Games We Play: Critical Complexity Improves Machine Learning.”
Bostrom. 2014. Superintelligence: Paths, Dangers, Strategies.
Bubeck, Chandrasekaran, Eldan, et al. 2023. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.”
Chalmers. 2016. “The Singularity.” In Science Fiction and Philosophy.
Chollet. 2019. “On the Measure of Intelligence.” arXiv:1911.01547 [Cs].
Collison, and Nielsen. 2018. “Science Is Getting Less Bang for Its Buck.” The Atlantic.
Donoho. 2023. “Data Science at the Singularity.”
Efferson, Richerson, and Weinberger. 2023. “Our Fragile Future Under the Cumulative Cultural Evolution of Two Technologies.” Philosophical Transactions of the Royal Society B: Biological Sciences.
Everitt, and Hutter. 2018. “Universal Artificial Intelligence: Practical Agents and Fundamental Challenges.” In Foundations of Trusted Autonomy.
Grace. 2013. “Algorithmic Progress in Six Domains.”
Grace, Salvatier, Dafoe, et al. 2018. “Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts.” Journal of Artificial Intelligence Research.
Grace, Stewart, Sandkühler, et al. 2024. “Thousands of AI Authors on the Future of AI.”
Hanson. 2016. The Age of Em: Work, Love, and Life when Robots Rule the Earth.
Harari. 2018. Homo Deus: A Brief History of Tomorrow.
Hawkins. 2021. A Thousand Brains: A New Theory of Intelligence.
Hernandez, and Brown. 2020. “Measuring the Algorithmic Efficiency of Neural Networks.”
Hildebrandt. 2020. “Smart Technologies.” Internet Policy Review.
Hutson. 2022. “Taught to the Test.” Science.
Hutter. 2000. “A Theory of Universal Artificial Intelligence Based on Algorithmic Complexity.”
———. 2005. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Texts in Theoretical Computer Science.
———. 2007. “Universal Algorithmic Intelligence: A Mathematical Top→Down Approach.” In Artificial General Intelligence.
———. 2012. “Can Intelligence Explode?” Journal of Consciousness Studies.
Hutter, Quarel, and Catt. 2024. An Introduction to Universal Artificial Intelligence.
Jeon, and Van Roy. 2024. “Information-Theoretic Foundations for Machine Learning.”
Johansen, and Sornette. 2001. “Finite-Time Singularity in the Dynamics of the World Population, Economic and Financial Indices.” Physica A: Statistical Mechanics and Its Applications.
Lee. 2020a. “Coevolution.” In The Coevolution: The Entwined Futures of Humans and Machines.
———. 2020b. The Coevolution: The Entwined Futures of Humans and Machines.
Legg. 2008. “Machine Super Intelligence.”
Legg, and Hutter. 2007. “Universal Intelligence: A Definition of Machine Intelligence.” Minds and Machines.
Manheim, and Garrabrant. 2019. “Categorizing Variants of Goodhart’s Law.”
Mitchell. 2021. “Why AI Is Harder Than We Think.” arXiv:2104.12871 [Cs].
Nathan, and Hyams. 2021. “Global Policymakers and Catastrophic Risk.” Policy Sciences.
Ngo, Chan, and Mindermann. 2024. “The Alignment Problem from a Deep Learning Perspective.”
Omohundro. 2008. “The Basic AI Drives.” In Proceedings of the 2008 Conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference.
Philippon. 2022. “Additive Growth.” Working Paper. Working Paper Series.
Russell. 2019. Human Compatible: Artificial Intelligence and the Problem of Control.
Sastry, Heim, Belfield, et al. n.d. “Computing Power and the Governance of Artificial Intelligence.”
Scott. 2022. “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale.” American Academy of Arts & Sciences.
Silver, Singh, Precup, et al. 2021. “Reward Is Enough.” Artificial Intelligence.
Sornette. 2003. “Critical Market Crashes.” Physics Reports.
Sunehag, and Hutter. 2013. “Principles of Solomonoff Induction and AIXI.” In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 – December 2, 2011. Lecture Notes in Computer Science.
Wong, and Bartlett. 2022. “Asymptotic Burnout and Homeostatic Awakening: A Possible Solution to the Fermi Paradox?” Journal of The Royal Society Interface.
Zenil, Tegnér, Abrahão, et al. 2023. “The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence.”
Zhang, Zhu, Saphra, et al. 2024. “Transcendence: Generative Models Can Outperform The Experts That Train Them.”