Small notes on the Rapture of the Nerds. If AI keeps improving, will explosive intelligence eventually cut humans out of the loop and go on without us? Will we be pensioned off in that case?
The internet has opinions about this.
- Matthew Hutson, Computers ace IQ tests but still make dumb mistakes. Can different tests help?
- François Chollet, The implausibility of intelligence explosion
- Is Science Stagnant?
- Ground zero, perhaps: Vernor Vinge’s The Coming Technological Singularity
- Stuart Russell on Making Artificial Intelligence Compatible with Humans, and an interview on various themes in his book (Russell 2019)
- Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
- Superintelligence: The Idea That Eats Smart People
- Kevin Scott argues for finding a unifying notion of knowledge work that covers what both humans and machines can do. (Scott 2022)
- Hildebrandt (2020) argues for talking about smart tech instead of AI tech.
- Everyone loves Bart Selman’s AAAI Presidential Address: The State of AI
- Asymptotic burnout and homeostatic awakening: a possible solution to the Fermi paradox?
A fruitful application of these ideas is in producing interesting science fiction and contemporary horror.
It is a shibboleth of the Rationalist community to hold that the risks of a possible AI explosion are under-managed compared to the risks of more literal explosions. Also to wonder whether an AI singularity has already happened and we are merely simulated by it.
I contend that managing, e.g., the climate crisis is on the critical path to even reaching hard AI takeoff, and we are not managing that risk well enough to get to the more exciting hard AI risks, so the question of which one we are failing to manage worse seems to me not so interesting.