Catastrophic risk
All your base are belong to dust
April 30, 2015 — November 9, 2024
This is a mere placeholder, although I should have more to say, since catastrophic risk was the central concern of the Chair of Entrepreneurial Risks, where I did my MSc under the supervision of Didier Sornette.
Much-discussed examples include some AI risks, some climate change outcomes, nuclear war, sufficiently bad epidemics… Normal accidents at an existential scale.
I have been persuaded to revisit this theme at the Australian AI Safety Forum.
There are many definitions of what constitutes a catastrophic risk. I like to think of them as “risks I cannot, even in principle, manage via an insurance policy”. If a risk is bad enough, there is a good chance that the insurance company won’t even exist to pay out my claim when the risk materialises, possibly because the economy has collapsed, or the planet has been cooked into an uninhabitable cinder. Such risks are catastrophic.
There are other definitions; in ecosystems, for example, a catastrophe is an event that induces a regime change: after the catastrophe, you are in a different ecosystem than the one you started in.
1 On evaluating risks we only get to see once
How do we manage risks of very bad outcomes that lie beyond precise statistical quantification? Black swans, tail events we never observe in-sample, and so on.
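One way to feel the difficulty: when losses are heavy-tailed, the sample mean of observed losses is an unreliable guide to the tail, so past experience systematically understates the worst case. A minimal simulation sketch (the Pareto tail index and all numbers here are illustrative assumptions, not estimates of anything real):

```python
import numpy as np

rng = np.random.default_rng(42)

# Pareto losses with tail index alpha < 2: the variance is infinite,
# so sample means converge painfully slowly, if at all usefully.
alpha = 1.1
n = 100_000
losses = rng.pareto(alpha, size=n) + 1.0

# Running sample mean: it jumps every time a rare, huge loss lands.
running_mean = np.cumsum(losses) / np.arange(1, n + 1)
print(running_mean[[99, 999, 9_999, 99_999]])

# The single largest observation accounts for a large share of total
# loss, a hallmark of heavy tails: history under-represents the worst case.
print(losses.max() / losses.sum())
```

In that regime a single extreme draw can dominate the cumulative loss, which is roughly where insurance-style risk pooling stops working, per the definition above.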
2 Institutions
3 For policy
See science for policy.
4 In financial markets
5 Prediction via markets
See prediction markets.
6 Incoming
Nassim Taleb has built a whole career on handling heavy-tailed risk and managing out-of-sample downsides (Taleb 2007, 2020).
Eliezer Yudkowsky, Moore’s Law of Mad Science:
Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.
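Taken literally (my formalisation of the quip, not Yudkowsky’s), that is linear decay: $\mathrm{IQ}_{\min}(t) = \mathrm{IQ}_{\min}(t_0) - \frac{t - t_0}{18\ \text{months}}$.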
Fun but implausibly simple differential equation models of civilisational implosion.
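For concreteness, here is what “implausibly simple” can look like: a two-variable overshoot-and-collapse toy in the spirit of such models. The functional form and every parameter below are my own illustrative assumptions, not any published model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def civilisation(t, state, r=0.03, k=0.02, d=0.01, regen=0.005):
    """Toy model: population P grows by consuming a slowly regenerating
    resource R; once R is depleted, P crashes. Parameters are made up."""
    P, R = state
    dP = r * P * R - d * P                 # growth needs resource; baseline mortality
    dR = regen * R * (1 - R) - k * P * R   # slow logistic regrowth vs. consumption
    return [dP, dR]

sol = solve_ivp(civilisation, t_span=(0, 1000), y0=[0.01, 1.0], max_step=1.0)

P, R = sol.y
print(f"peak population:  {P.max():.3f} at t = {sol.t[P.argmax()]:.0f}")
print(f"final population: {P[-1]:.5f}")  # overshoot, then collapse
```

The point of such an exercise is the qualitative shape, boom then crash once net resource flow turns negative, not any quantitative forecast; hence “implausibly simple”.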