AI Safety
Getting ready for the grown-ups to arrive
October 31, 2024
Forked from superintelligence because the risk mitigation strategies are a field in themselves. Or rather, several distinct fields, which I need to map out in this notebook.
1 X-risk
X-risk (existential risk) is a term used in, e.g., the rationalist community to discuss the risks of a possible AI intelligence explosion.
FWIW: I personally think that (various kinds of) AI x-risk are plausible, and serious enough to worry about, even if they are not the most likely outcome. If the possibility on the table is that everyone dies, then we should be worried about it, even if it is only a 1% chance.
I would like to write some wicked tail risk theory at some point.
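As a toy version of that kind of calculation (numbers purely illustrative: a 1% probability and a current population of roughly 8 billion), the naive expected death toll is already enormous:

$$
\mathbb{E}[\text{lives lost}] = p \times N = 0.01 \times 8\times 10^{9} = 8\times 10^{7},
$$

i.e. tens of millions of deaths in expectation, before we even count the loss of all future generations, which is why tail risks resist being waved away as merely unlikely.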
2 X-risk risk
Some people, notably accelerationists, think that focusing on x-risk is itself a risky distraction from more pressing problems.
E.g. what if we fail to solve the climate crisis because we put our effort into AI risk instead? Or put in so much effort that we slow down the AI that could have saved us? Or so much effort that we are distracted from other, more pressing risks?
Here is one piece that I found rather interesting: Superintelligence: The Idea That Eats Smart People (although I thought that effective altruism meta criticism was the idea that ate smart people).
Personally, I doubt these need to be zero-sum trade-offs. Getting the human species ready to deal with catastrophes in general seems like a feasible intermediate goal.
There is a currently viral school of x-risk-risk critique that frames concern about x-risk as a symptom of TESCREALism, which might be of interest to some readers.
2.1 Most-important century model
- Holden Karnofsky, The “most important century” blog post series
- Robert Wiblin’s analysis: This could be the most important century
3 Theoretical tools
3.1 SLT
Singular learning theory has been pitched to me as a tool with applications to AI safety.
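For my own reference, a rough sketch of the headline result as I understand it (notation loosely paraphrases Watanabe; treat this as a gloss, not the precise statement): for a singular model, the Bayesian free energy expands as

$$
F_n = n L_n(w_0) + \lambda \log n - (m - 1)\log\log n + O_p(1),
$$

where $w_0$ is the optimal parameter, $\lambda$ is the real log canonical threshold (a.k.a. the local learning coefficient) and $m$ its multiplicity. In regular models $\lambda = d/2$, recovering the BIC penalty; in singular models $\lambda$ can be much smaller, which is why it gets pitched as a measure of effective model complexity relevant to interpretability and thus to safety.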
3.2 Sparse AE
See Sparse Autoencoders for an explanation; they have had a moment lately.
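For concreteness, here is a minimal sketch of the kind of sparse autoencoder used in interpretability work. Dimensions and hyperparameters are hypothetical, and real recipes involve more care (normalized or tied decoder weights, resampling dead features, etc.):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder: an overcomplete dictionary with an L1 sparsity penalty."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)  # d_hidden >> d_model, i.e. overcomplete
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))  # sparse, non-negative feature activations
        x_hat = self.decoder(z)          # reconstruction of the input activations
        return x_hat, z


def sae_loss(x, x_hat, z, l1_coeff=1e-3):
    recon = ((x - x_hat) ** 2).mean()  # reconstruction error
    sparsity = z.abs().mean()          # L1 penalty pushing most features to zero
    return recon + l1_coeff * sparsity


# Toy usage: fit a sparse dictionary over stand-in "model activations".
sae = SparseAutoencoder(d_model=512, d_hidden=4096)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
x = torch.randn(64, 512)  # a batch of fake activation vectors
x_hat, z = sae(x)
loss = sae_loss(x, x_hat, z)
loss.backward()
opt.step()
```

The hope is that individual hidden units end up corresponding to human-interpretable features of the underlying model's activations.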
3.3 Algorithmic Game Theory
4 Aligning AI
Let us consider general alignment, because I have little that is AI-specific to say yet.
5 Incoming
AiSafety.com’s landscape map: https://aisafety.world/
Wong and Bartlett (2022)
we hypothesize that once a planetary civilization transitions into a state that can be described as one virtually connected global city, it will face an ‘asymptotic burnout’, an ultimate crisis where the singularity-interval time scale becomes smaller than the time scale of innovation. If a civilization develops the capability to understand its own trajectory, it will have a window of time to affect a fundamental change to prioritize long-term homeostasis and well-being over unyielding growth—a consciously induced trajectory change or ‘homeostatic awakening’. We propose a new resolution to the Fermi paradox: civilizations either collapse from burnout or redirect themselves to prioritising homeostasis, a state where cosmic expansion is no longer a goal, making them difficult to detect remotely.
Ten Hard Problems in and around AI
We finally published our big 90-page intro to AI. Its likely effects, from ten perspectives, ten camps. The whole gamut: ML, scientific applications, social applications, access, safety and alignment, economics, AI ethics, governance, and classical philosophy of life.
The follow-on 2024 Survey of 2,778 AI authors: six parts in pictures
Douglas Hofstadter changes his mind on Deep Learning & AI risk
François Chollet, The implausibility of intelligence explosion
Stuart Russell on Making Artificial Intelligence Compatible with Humans, an interview on various themes in his book (Russell 2019)
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
Kevin Scott argues for trying to find a notion of knowledge work that unifies what humans and machines can do (Scott 2022).