Living with risk


David Spiegelhalter, Survival by arrows

Risk, perception of: how humans make judgements under uncertainty, as actors able to form beliefs about the future.

Failures of reasoning under uncertainty

Scott Alexander, A failure, but not of prediction

And if the risk was 10%, shouldn't that have been the headline? "TEN PERCENT CHANCE THAT THERE IS ABOUT TO BE A PANDEMIC THAT DEVASTATES THE GLOBAL ECONOMY, KILLS HUNDREDS OF THOUSANDS OF PEOPLE, AND PREVENTS YOU FROM LEAVING YOUR HOUSE FOR MONTHS"? Isn't that a better headline than "Coronavirus panic sells as alarmist information spreads on social media"? But that's the headline you could have written if your odds were ten percent!

So:

I think people acted like Goofus again.

People were presented with a new idea: a global pandemic might arise and change everything. They waited for proof. The proof didn't arrive, at least at first. I remember hearing people say things like "there's no reason for panic, there are currently only ten cases in the US". This should sound like "there's no reason to panic, the asteroid heading for Earth is still several weeks away". The only way I can make sense of it is through a mindset where you are not allowed to entertain an idea until you have proof of it. Nobody had incontrovertible evidence that coronavirus was going to be a disaster, so until someone did, people defaulted to the null hypothesis that it wouldn't be.

Gallant wouldn't have waited for proof. He would have checked prediction markets and asked top experts for probabilistic judgments. If he heard numbers like 10 or 20 percent, he would have done a cost-benefit analysis and found that putting some tough measures into place, like quarantine and social distancing, would be worthwhile if they had a 10 or 20 percent chance of averting catastrophe.

Fat-tailed risks

🏗

Nassim Nicholas Taleb, On single point forecasts for fat tailed variables

Micromorts

David Spiegelhalter and Mike Pearson, Understanding uncertainty: Small but lethal

A micromort is a one-in-a-million chance of dying from misadventure. It has a natural interpretation:

We can also consider the 18,000 people who died from “external causes” in England and Wales in 2008. That’s those people out of the total population of 54 million who died from accidents, murders, suicides and so on (see the Office for National Statistics website for more information). This corresponds to an average of

\[ \frac{18{,}000}{54 \times 365} \approx 1 \]

micromort per day, so we can think of a micromort as the average “ration” of lethal risk that people spend each day, and which we do not unduly worry about.
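
As a sanity check on that arithmetic, here is the same calculation spelled out (a back-of-envelope sketch using the figures from the quote above):

```python
# Back-of-envelope micromort arithmetic, using the figures quoted above.
deaths_per_year = 18_000       # deaths from "external causes", England & Wales, 2008
population = 54_000_000        # total population of England and Wales
days_per_year = 365

# Daily risk of death from external causes, expressed in micromorts
# (1 micromort = a one-in-a-million chance of dying).
micromorts_per_day = deaths_per_year / population / days_per_year * 1_000_000
print(f"{micromorts_per_day:.2f} micromorts per day")  # ≈ 0.91, i.e. roughly 1
```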

Micromorts can also be used to compare death risks between countries.

Microlives

A microlife is a measure of actuarial life risk, telling you how much of your statistical life expectancy (from all causes) you are burning through.

Spiegelhalter also discusses:

many risks we take don’t kill you straight away: think of all the lifestyle frailties we get warned about, such as smoking, drinking, eating badly, not exercising and so on. The microlife aims to make all these chronic risks comparable by showing how much life we lose on average when we’re exposed to them:

a microlife is 30 minutes of your life expectancy

Spiegelhalter is a wizard of risk communication. For example, he gives this perspective on COVID-19:

So, roughly speaking, we might say that getting COVID-19 is like packing a year’s worth of risk into a week or two.
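
To get a feel for the unit, here is a small conversion sketch (the 30-minutes-per-microlife definition is Spiegelhalter's; the habit figure is hypothetical):

```python
MINUTES_PER_MICROLIFE = 30  # Spiegelhalter's definition of one microlife

def microlives_to_hours(microlives: float) -> float:
    """Convert microlives into hours of statistical life expectancy."""
    return microlives * MINUTES_PER_MICROLIFE / 60

# A million microlives is roughly an adult lifetime of life expectancy:
years = 1_000_000 * MINUTES_PER_MICROLIFE / (60 * 24 * 365.25)
print(f"1,000,000 microlives ≈ {years:.0f} years")  # ≈ 57 years

# Hypothetical chronic habit costing 2 microlives per day, kept up for a year:
print(f"≈ {microlives_to_hours(2 * 365):.0f} hours of life expectancy lost per year")  # ≈ 365
```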

Heuristics and biases in risk perception

Trying to work out how people make decisions and whether that process is

  • probabilistic
  • rational
  • optimal
  • manipulable

and what definitions of each of those words would be required to make this work.

Economists often seem to really want the relationship between people’s empirical behaviour in the face of risk and the decision-theoretically optimal strategy to be simple, in the sense of having a mathematically simple form. A more evolutionarily plausible kind of simplicity is the idea that our messy, lumpy Pleistocene brains worry about great white sharks more than loan sharks because the former have more teeth, and other such approximate, not-too-awful heuristics.

Assigned reading 1: The Beast Upon Your Shoulder, The Price Upon Your Head.

Assigned reading 2: Visualising risk in the NHS Breast cancer screening leaflet

Health information leaflets are designed for those with a reading age of 11, and similar levels of numeracy. But there is some reasonable evidence that people of low reading age and numeracy are less likely to take advantage of options for informed choice and shared decision making. So we are left with the conclusion:

Health information leaflets are designed for people who do not want to read the leaflets.

The classic Ellsberg paradox. The setup: Urn 1 contains 50 black balls and 50 white balls; Urn 2 contains 100 balls, black and white in an unknown proportion. You draw one ball from an urn of your choice and win $100 if it is the colour you bet on: B1 and W1 are bets on black and white from Urn 1, B2 and W2 the corresponding bets on Urn 2.

If you’re like most people, you don’t have a preference between B1 and W1, nor between B2 and W2. But most people prefer B1 to B2 and W1 to W2. That is, they prefer “the devil they know”: They’d rather choose the urn with the measurable risk than the one with unmeasurable risk.

This is surprising. The expected payoff from Urn 1 is $50. The fact that most people favor B1 over B2 implies that they believe that Urn 2 contains fewer black balls than Urn 1. But these people most often also favor W1 over W2, implying that they believe that Urn 2 also contains fewer white balls, a contradiction.

Ellsberg offered this as evidence of “ambiguity aversion,” a preference in general for known risks over unknown risks. Why people exhibit this preference isn’t clear. Perhaps they associate ambiguity with ignorance, incompetence, or deceit, or possibly they judge that Urn 1 would serve them better over a series of repeated draws.
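
The contradiction can be made mechanical: under expected utility, no single subjective probability of drawing black from Urn 2 can rationalize both strict preferences at once. A minimal sketch (the $100 payoff matches the $50 expected value above):

```python
# Under expected (here linear) utility, believing Urn 2 holds fewer black balls
# than Urn 1 entails believing it holds MORE white balls, so the modal
# preference pattern (B1 ≻ B2 and W1 ≻ W2) cannot both hold strictly.
PAYOFF = 100.0
P1_BLACK = 0.5  # Urn 1 is known: 50 black, 50 white

def ev(p_win: float) -> float:
    return p_win * PAYOFF

for p2_black in (0.3, 0.5, 0.7):  # candidate beliefs about Urn 2
    b1_over_b2 = ev(P1_BLACK) > ev(p2_black)
    w1_over_w2 = ev(1 - P1_BLACK) > ev(1 - p2_black)
    print(f"P(black in Urn 2)={p2_black}: B1≻B2={b1_over_b2}, W1≻W2={w1_over_w2}")
# Whatever belief we pick, at most one of the two strict preferences holds.
```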

The paradox is named for RAND Corporation economist Daniel Ellsberg, of Pentagon Papers fame.

Rabin and Thaler (2001):

Using expected utility to explain anything more than economically negligible risk aversion over moderate stakes such as $10, $100, and even $1,000 requires a utility-of-wealth function that predicts absurdly severe risk aversion over very large stakes. Conventional expected utility theory is simply not a plausible explanation for many instances of risk aversion that economists study.
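
A quick numerical illustration in the spirit of their calibration argument (a sketch; the wealth level and CRRA curvature are hypothetical, chosen so the agent just rejects the modest gamble):

```python
# Sketch of the Rabin-Thaler calibration point with CRRA utility.
W = 20_000.0   # hypothetical wealth
GAMMA = 20.0   # coefficient of relative risk aversion needed to reject the small gamble

def u(w: float) -> float:
    """CRRA utility, gamma > 1 branch (negative-valued, increasing, concave)."""
    return w ** (1 - GAMMA) / (1 - GAMMA)

def accepts(gain: float, loss: float) -> bool:
    """Does the agent prefer the 50-50 gamble to standing pat?"""
    return 0.5 * u(W + gain) + 0.5 * u(W - loss) > u(W)

print(accepts(110, 100))   # False: rejects "lose $100 / gain $110"...
print(accepts(1e9, 1000))  # False: ...but then also rejects "lose $1,000 / gain $1 billion"
```

The curvature needed to turn down the moderate-stakes bet is so extreme that it forbids an almost arbitrarily favourable large bet, which is the absurdity Rabin and Thaler point to.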

Ergodicity economics prefers not to regard this as a bias but as a learning strategy. How does that work then? I don’t know anything about it but it looks a little… cottage?

Mark Buchanan, How ergodicity reimagines economics for the benefit of us all:

The view of expected utility theory is that people should handle it by calculating the expected benefit to come from any possible choice, and choosing the largest. Mathematically, the expected ‘return’ from some choices can be calculated by summing up the possible outcomes, and weighting the benefits they give by the probability of their occurrence.

This inspired LML efforts to rewrite the foundations of economic theory, avoiding the lure of averaging over possible outcomes, and instead averaging over outcomes in time, with one thing happening after another, as in the real world.
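
To make Buchanan's distinction concrete, here is a simulation of the standard multiplicative coin-flip example from this literature (wealth multiplied by 1.5 on heads, 0.6 on tails): the ensemble average grows 5% per round, yet almost every individual trajectory shrinks.

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_rounds = 10_000, 1_000

# Each round multiplies wealth by 1.5 (heads) or 0.6 (tails), starting from 1.
factors = rng.choice([1.5, 0.6], size=(n_agents, n_rounds))
terminal_wealth = factors.prod(axis=1)

# Ensemble (expectation) average growth per round: 0.5*1.5 + 0.5*0.6 = 1.05
print("ensemble-average growth factor:", 0.5 * 1.5 + 0.5 * 0.6)

# Time-average growth per round: exp(E[log factor]) = sqrt(1.5 * 0.6) ≈ 0.95
print("time-average growth factor:", np.sqrt(1.5 * 0.6))

# Nearly every agent is ruined, even though the ensemble mean grows without bound.
print("fraction of agents who lost money:", (terminal_wealth < 1).mean())
```

The sample mean of terminal wealth is meanwhile dominated by a handful of lucky trajectories, which is exactly the gap between ensemble and time averages that Peters (2019) emphasizes.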

I am guessing some kind of online learning type setting, minimising regret over possibly non-stationary data or something? 🤷‍♂

There is an introduction by Jason Collins that puts it in layperson's terms.

Case, Donald O., James E. Andrews, J. David Johnson, and Suzanne L. Allard. 2005. “Avoiding Versus Seeking: The Relationship of Information Seeking to Avoidance, Blunting, Coping, Dissonance, and Related Concepts.” Journal of the Medical Library Association 93 (3): 353–62. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1175801/.

Cirillo, Pasquale, and Nassim Nicholas Taleb. 2020. “Tail Risk of Contagious Diseases.” Nature Physics 16 (6): 606–13. https://doi.org/10.1038/s41567-020-0921-x.

Enke, Benjamin, and Thomas Graeber. 2019. “Cognitive Uncertainty.” Working Paper 26518. National Bureau of Economic Research. https://doi.org/10.3386/w26518.

Freeman, Alexandra L J. 2019. “How to Communicate Evidence to Patients.” Drug and Therapeutics Bulletin 57 (8): 119–24. https://doi.org/10.1136/dtb.2019.000008.

Gigerenzer, Gerd, and Daniel G. Goldstein. 1996. “Reasoning the Fast and Frugal Way: Models of Bounded Rationality.” Psychological Review 103 (4): 650–69. https://doi.org/10.1037/0033-295X.103.4.650.

Gigerenzer, Gerd, and Ulrich Hoffrage. 1995. “How to Improve Bayesian Reasoning Without Instruction: Frequency Formats.” Psychological Review 102 (4): 684–704. https://doi.org/10.1037/0033-295X.102.4.684.

Golman, Russell, David Hagmann, and George Loewenstein. 2017. “Information Avoidance.” Journal of Economic Literature 55 (1): 96–135. https://doi.org/10.1257/jel.20151245.

Golman, Russell, and George Loewenstein. 2015. “Curiosity, Information Gaps, and the Utility of Knowledge.” SSRN Scholarly Paper ID 2149362. Rochester, NY: Social Science Research Network. http://papers.ssrn.com/abstract=2149362.

Howell, Jennifer L., and James A. Shepperd. 2012. “Behavioral Obligation and Information Avoidance.” Annals of Behavioral Medicine 45 (2): 258–63. https://doi.org/10.1007/s12160-012-9451-9.

Kahneman, Daniel. 2003a. “A Psychological Perspective on Economics.” The American Economic Review 93 (2): 162–68. https://doi.org/10.1257/000282803321946985.

———. 2003b. “Maps of Bounded Rationality: Psychology for Behavioral Economics.” The American Economic Review 93 (5): 1449–75.

Kahneman, Daniel, and Amos Tversky. 1979. “Prospect Theory: An Analysis of Decision Under Risk.” Econometrica 47 (2): 263–92. http://www.nolle.se/wp-content/2011/11/Kahneman-Tversky-Prospect-theory79.pdf.

———. 1984. “Choices, Values, and Frames.” American Psychologist 39 (4): 341–50. https://doi.org/10.1037/0003-066X.39.4.341.

Köszegi, Botond, and Matthew Rabin. 2006. “A Model of Reference-Dependent Preferences.” Quarterly Journal of Economics 121 (4): 1133–65. https://doi.org/10.1162/qjec.121.4.1133.

Loomes, Graham, and Robert Sugden. 1982. “Regret Theory: An Alternative Theory of Rational Choice Under Uncertainty.” Economic Journal 92 (368): 805–24.

Peters, Ole. 2019. “The Ergodicity Problem in Economics.” Nature Physics 15 (12): 1216–21. https://doi.org/10.1038/s41567-019-0732-0.

Rabin, Matthew, and Richard H. Thaler. 2001. “Anomalies: Risk Aversion.” The Journal of Economic Perspectives 15 (1): 219–32. https://doi.org/10.2307/2696549.

Ruggeri, Kai, Sonia Alí, Mari Louise Berge, Giulia Bertoldo, Ludvig D. Bjørndal, Anna Cortijos-Bernabeu, Clair Davison, et al. 2020. “Replicating Patterns of Prospect Theory for Decision Under Risk.” Nature Human Behaviour, May, 1–12. https://doi.org/10.1038/s41562-020-0886-x.

Samuelson, Paul A. 1938. “A Note on the Pure Theory of Consumer’s Behaviour.” Economica 5 (17): 61–71. https://doi.org/10.2307/2548836.

Sedlmeier, Peter, and Gerd Gigerenzer. 2001. “Teaching Bayesian Reasoning in Less Than Two Hours.” Journal of Experimental Psychology: General 130 (3): 380–400.

Sharpe, Keiran. 2015. “On the Ellsberg Paradox and Its Extension by Machina.” SSRN Scholarly Paper ID 2630471. Rochester, NY: Social Science Research Network. http://papers.ssrn.com/abstract=2630471.

Simon, Herbert A. 1956. “Rational Choice and the Structure of the Environment.” Psychological Review 63 (2): 129–38. https://doi.org/10.1037/h0042769.

———. 1958. “"The Decision-Making Schema": A Reply.” Public Administration Review 18 (1): 60–63. https://doi.org/10.2307/973736.

———. 1975. “Style in Design.” Spatial Synthesis in Computer-Aided Building Design 9: 287–309. http://edra.org/sites/default/files/publications/EDRA02-Simon-1-10_0.pdf.

Sloman, Steven A., and Philip Fernbach. 2017. The Knowledge Illusion: Why We Never Think Alone. New York: Riverhead Books.

Taleb, Nassim Nicholas. 2020. “On the Statistical Differences Between Binary Forecasts and Real-World Payoffs.” International Journal of Forecasting, April. https://doi.org/10.1016/j.ijforecast.2019.12.004.

Tversky, A., and D. Kahneman. 1981. “The Framing of Decisions and the Psychology of Choice.” Science 211 (4481): 453–58. https://doi.org/10.1126/science.7455683.

Tversky, Amos, and Itamar Gati. 1982. “Similarity, Separability, and the Triangle Inequality.” Psychological Review 89 (2): 123–54. https://doi.org/10.1037/0033-295X.89.2.123.

Tversky, Amos, and Daniel Kahneman. 1973. “Availability: A Heuristic for Judging Frequency and Probability.” Cognitive Psychology 5 (2): 207–32. https://doi.org/10.1016/0010-0285(73)90033-9.

———. 1974. “Judgment Under Uncertainty: Heuristics and Biases.” Science 185 (4157): 1124–31. https://doi.org/10.1126/science.185.4157.1124.

Wolpert, Daniel M, R Chris Miall, and Mitsuo Kawato. 1998. “Internal Models in the Cerebellum.” Trends in Cognitive Sciences 2: 338–47. https://doi.org/10.1016/S1364-6613(98)01221-2.

Wolpert, David H. 2010. “Why Income Comparison Is Rational.” Games and Economic Behavior 69 (2): 458–74. https://doi.org/10.1016/j.geb.2009.12.001.