Technological singularities

Incorporating hard AI take-offs, game-over high scores, the technium, deus-ex-machina, deus-ex-nube, nerd raptures and so forth



Small notes on the Rapture of the Nerds. If AI keeps on improving, will explosive intelligence eventually cut humans out of the loop and go on without us? Will we be pensioned off in that case?

The internet has opinions about this.

A fruitful application of these ideas is in producing interesting science fiction and contemporary horror.

It is a shibboleth of the Rationalist community to hold that the risks of a possible AI explosion are under-managed compared to the risks of more literal explosions, and also to wonder whether an AI singularity has already happened and we are merely being simulated by it.

I contend that managing, e.g., the climate crisis is on the critical path to even reaching a hard AI takeoff, and that we are not managing that risk well enough to get to the more exciting hard AI risks; so the question of which risk we are failing to manage worse seems to me not especially interesting.

Models of AGI

Hutter's UAI (Universal Artificial Intelligence; Hutter 2000), the framework behind the AIXI agent (Sunehag and Hutter 2013; Everitt and Hutter 2018).
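
For orientation, here is the AIXI action-selection rule in schematic form. This is my own transcription of the standard expectimax-over-programs expression (cf. Hutter 2000; Everitt and Hutter 2018), so the notation may differ in detail from theirs:

$$
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
$$

Here $a$, $o$, $r$ are actions, observations and rewards, $U$ is a universal (monotone) Turing machine, $q$ ranges over environment programs, $\ell(q)$ is program length, and $m$ is the horizon. The agent maximises expected total reward under a Solomonoff-style mixture that weights each computable environment by $2^{-\ell(q)}$. The construction is uncomputable, which is why much of the literature cited here is about approximations and about what the idealisation does and does not say about practical agents.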

References

Acemoglu, Daron, David Autor, Jonathon Hazell, and Pascual Restrepo. 2020. “AI and Jobs: Evidence from Online Vacancies.” Working Paper 28257. National Bureau of Economic Research.
Acemoglu, Daron, and Pascual Restrepo. 2018. “Artificial Intelligence, Automation and Work.” Working Paper 24196. National Bureau of Economic Research.
Acemoglu, Daron, and Pascual Restrepo. 2020. “The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand.” Cambridge Journal of Regions, Economy and Society 13 (1): 25–35.
Birhane, Abeba, and David J. T. Sumpter. 2022. “The Games We Play: Critical Complexity Improves Machine Learning.” arXiv.
Chollet, François. 2019. “On the Measure of Intelligence.” arXiv:1911.01547 [Cs], November.
Collison, Patrick, and Michael Nielsen. 2018. “Science Is Getting Less Bang for Its Buck.” The Atlantic, November 16, 2018.
Everitt, Tom, and Marcus Hutter. 2018. “Universal Artificial Intelligence: Practical Agents and Fundamental Challenges.” In Foundations of Trusted Autonomy, edited by Hussein A. Abbass, Jason Scholz, and Darryn J. Reid, 117:15–46. Cham: Springer International Publishing.
Hildebrandt, Mireille. 2020. “Smart Technologies.” Internet Policy Review 9 (4).
Hutson, Matthew. 2022. “Taught to the Test.” Science 376 (6593): 570–73.
Hutter, Marcus. 2000. “A Theory of Universal Artificial Intelligence Based on Algorithmic Complexity.” arXiv.
Mitchell, Melanie. 2021. “Why AI Is Harder Than We Think.” arXiv:2104.12871 [Cs], April.
Nathan, Christopher, and Keith Hyams. 2021. “Global Policymakers and Catastrophic Risk.” Policy Sciences, December.
Philippon, Thomas. 2022. “Additive Growth.” Working Paper. Working Paper Series. National Bureau of Economic Research.
Russell, Stuart. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. Penguin Books.
Scott, Kevin. 2022. “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale.” American Academy of Arts & Sciences, 2022.
Silver, David, Satinder Singh, Doina Precup, and Richard S. Sutton. 2021. “Reward Is Enough.” Artificial Intelligence 299 (October): 103535.
Sunehag, Peter, and Marcus Hutter. 2013. “Principles of Solomonoff Induction and AIXI.” In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 – December 2, 2011, edited by David L. Dowe, 386–98. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
Wong, Michael L., and Stuart Bartlett. 2022. “Asymptotic Burnout and Homeostatic Awakening: A Possible Solution to the Fermi Paradox?” Journal of The Royal Society Interface 19 (190): 20220029.
