Risk, perception of. How humans make judgements under uncertainty, as actors able to form beliefs about the future.
We can also consider the 18,000 people who died from "external causes" in England and Wales in 2008: those, out of a total population of 54 million, who died from accidents, murders, suicides and so on (see the Office for National Statistics website for more information). This corresponds to an average of
\[ \frac{18{,}000}{54 \times 365} \approx 1 \]
micromort per day (counting the population in millions), so we can think of a micromort as the average "ration" of lethal risk that people spend each day, and which we do not unduly worry about.
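The arithmetic above can be checked directly, using the figures from the source (18,000 deaths, 54 million people):

```python
# Average daily risk of death from "external causes", in micromorts.
deaths_per_year = 18_000       # external-cause deaths, England & Wales, 2008
population = 54_000_000        # approximate population at the time
days_per_year = 365

prob_per_day = deaths_per_year / (population * days_per_year)
micromorts_per_day = prob_per_day * 1_000_000  # micromort = one-in-a-million chance of death

print(round(micromorts_per_day, 2))  # 0.91, i.e. roughly 1 micromort per day
```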
Micromorts are also used to compare death risks across countries.
A microlife is a measure of actuarial life risk, telling you how much of your statistical life expectancy (from all causes) you are burning through.
Spiegelhalter also discusses chronic risks: many of the risks we take don't kill us straight away. Think of all the lifestyle frailties we get warned about, such as smoking, drinking, eating badly, not exercising and so on. The microlife aims to make all these chronic risks comparable by showing how much life we lose on average when we're exposed to them:

a microlife is 30 minutes of your life expectancy
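The "30 minutes" figure can be sanity-checked: a microlife is one-millionth of a young adult's remaining life expectancy, which Spiegelhalter rounds to 57 years, and 57 years contains about a million half-hours.

```python
# One microlife = one-millionth of ~57 years of remaining adult life
# expectancy (the round figure Spiegelhalter uses).
minutes_remaining = 57 * 365.25 * 24 * 60     # ~30 million minutes
microlife_minutes = minutes_remaining / 1_000_000
print(round(microlife_minutes))  # 30
```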
David Spiegelhalter is a wizard of risk communication. For example, he gives this perspective on COVID-19:
So, roughly speaking, we might say that getting COVID-19 is like packing a year’s worth of risk into a week or two.
Heuristics and biases in risk perception
Trying to work out how people make decisions, whether those decisions are rational, and what definitions of each of those words would be required to make this work.
Economists often seem to really want the relationship between people's empirical behaviour in the face of risk and the decision-theoretically optimal strategy to be simple, in the sense of mathematically simple. Evolutionarily plausible simplicity is a different kind: the idea that our messy, lumpy Pleistocene brains worry about great white sharks more than loan sharks because the former have more teeth, and other such approximate, not-too-awful heuristics.
Assigned reading 1: The Beast Upon Your Shoulder, The Price Upon Your Head.
Assigned reading 2: Visualising risk in the NHS Breast cancer screening leaflet
Health information leaflets are designed for those with a reading age of 11, and similar levels of numeracy. But there is some reasonable evidence that people of low reading age and numeracy are less likely to take advantage of options for informed choice and shared decision-making. So we are left with the conclusion:
Health information leaflets are designed for people who do not want to read the leaflets.
The classic Ellsberg paradox, in its standard two-urn form: Urn 1 contains 50 black and 50 white balls; Urn 2 contains 100 balls, black and white in unknown proportion. Bet B1 pays $100 if a black ball is drawn from Urn 1, W1 if a white ball is drawn from Urn 1, and B2 and W2 likewise for Urn 2.

If you're like most people, you don't have a preference between B1 and W1, nor between B2 and W2. But most people prefer B1 to B2 and W1 to W2. That is, they prefer "the devil they know": they'd rather choose the urn with the measurable risk than the one with unmeasurable risk.
This is surprising. The expected payoff from Urn 1 is $50. The fact that most people favor B1 over B2 implies that they believe that Urn 2 contains fewer black balls than Urn 1. But these people most often also favor W1 over W2, implying that they believe that Urn 2 also contains fewer white balls, a contradiction.
Ellsberg offered this as evidence of “ambiguity aversion,” a preference in general for known risks over unknown risks. Why people exhibit this preference isn’t clear. Perhaps they associate ambiguity with ignorance, incompetence, or deceit, or possibly they judge that Urn 1 would serve them better over a series of repeated draws.
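The contradiction can be spelled out numerically. Assuming the standard $100 bets, preferring B1 to B2 requires a subjective probability p of drawing black from Urn 2 with 100p < 50, while preferring W1 to W2 requires 100(1 - p) < 50, and no p satisfies both:

```python
# Search for a subjective probability p = P(black from Urn 2) that makes
# both observed preferences consistent with expected-value maximisation.
# B1 over B2 needs 100*p < 50; W1 over W2 needs 100*(1 - p) < 50.
candidates = [p / 1000 for p in range(1001)]
consistent = [p for p in candidates if 100 * p < 50 and 100 * (1 - p) < 50]
print(consistent)  # [] -- no such probability exists
```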
The paradox was popularized by RAND Corporation economist Daniel Ellsberg, of Pentagon Papers fame.
Rabin and Thaler (2001):
Using expected utility to explain anything more than economically negligible risk aversion over moderate stakes such as $10, $100, and even $1,000 requires a utility-of-wealth function that predicts absurdly severe risk aversion over very large stakes. Conventional expected utility theory is simply not a plausible explanation for many instances of risk aversion that economists study.
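A rough numerical sketch of the calibration point (the wealth level and stakes here are invented for illustration; this is the flavour of the argument, not Rabin's actual theorem): fit a CRRA utility function so that an agent just barely turns down a 50-50 bet to lose $100 or gain $110, then see what that curvature implies at large stakes.

```python
# CRRA utility of wealth, u(w) = w**(1-g) / (1-g), for g > 1.
def u(w, g):
    return w ** (1 - g) / (1 - g)

# Does the agent accept a 50-50 bet to lose `lose` or gain `gain`?
def accepts(w, g, lose, gain):
    return 0.5 * u(w - lose, g) + 0.5 * u(w + gain, g) > u(w, g)

# Bisect for the risk-aversion coefficient g at which an agent with
# wealth $20,000 becomes indifferent to a 50-50 lose-$100/gain-$110 bet.
lo, hi = 1.01, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if accepts(20_000, mid, 100, 110):
        lo = mid          # still accepts: needs more curvature to refuse
    else:
        hi = mid
g = hi
print(round(g, 1))        # roughly 18: an implausibly severe curvature

# With that same g, the agent also rejects a 50-50 bet to lose $1,000
# or gain $10 million -- the "absurdly severe" large-stakes implication.
print(accepts(20_000, g, 1_000, 10_000_000))  # False
```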
Ergodicity economics prefers not to regard this as a bias but as a learning strategy. How does that work, then? I don't know anything about it but it looks a little… cottage?
Mark Buchanan, How ergodicity reimagines economics for the benefit of us all:
The view of expected utility theory is that people should handle it by calculating the expected benefit to come from any possible choice, and choosing the largest. Mathematically, the expected ‘return’ from some choices can be calculated by summing up the possible outcomes, and weighting the benefits they give by the probability of their occurrence.
This inspired LML efforts to rewrite the foundations of economic theory, avoiding the lure of averaging over possible outcomes, and instead averaging over outcomes in time, with one thing happening after another, as in the real world.
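A standard toy example of the distinction (this is Peters's repeated multiplicative coin toss, not anything specific to Buchanan's article): multiply your wealth by 1.5 on heads and by 0.6 on tails. The ensemble-average factor per round is 1.05, so averaging over possible outcomes says play; the time-average (geometric mean) factor is sqrt(1.5 × 0.6) ≈ 0.95, so almost every individual trajectory decays.

```python
import random

random.seed(1)

# Ensemble average: expected wealth-multiplier per round, over outcomes.
ensemble_factor = 0.5 * 1.5 + 0.5 * 0.6      # 1.05 > 1: looks like a good bet

# Time average: the growth factor one player experiences per round.
time_factor = (1.5 * 0.6) ** 0.5             # ~0.949 < 1: wealth decays in time

# One long trajectory: the time average, not the ensemble average, wins.
wealth = 1.0
for _ in range(10_000):
    wealth *= 1.5 if random.random() < 0.5 else 0.6

print(ensemble_factor, round(time_factor, 3), wealth)
```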
I am guessing some kind of online-learning-type setting, minimising regret over possibly non-stationary data or something? 🤷‍♂️
There is an introduction by Jason Collins which puts it in layperson terms.
Case, Donald O., James E. Andrews, J. David Johnson, and Suzanne L. Allard. 2005. “Avoiding Versus Seeking: The Relationship of Information Seeking to Avoidance, Blunting, Coping, Dissonance, and Related Concepts.” Journal of the Medical Library Association 93 (3): 353–62. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1175801/.
Enke, Benjamin, and Thomas Graeber. 2019. “Cognitive Uncertainty.” Working Paper 26518. National Bureau of Economic Research. https://doi.org/10.3386/w26518.
Freeman, Alexandra L J. 2019. “How to Communicate Evidence to Patients.” Drug and Therapeutics Bulletin 57 (8): 119–24. https://doi.org/10.1136/dtb.2019.000008.
Gigerenzer, Gerd, and Daniel G. Goldstein. 1996. “Reasoning the Fast and Frugal Way: Models of Bounded Rationality.” Psychological Review 103 (4): 650–69. https://doi.org/10.1037/0033-295X.103.4.650.
Gigerenzer, Gerd, and Ulrich Hoffrage. 1995. “How to Improve Bayesian Reasoning Without Instruction: Frequency Formats.” Psychological Review 102 (4): 684–704. https://doi.org/10.1037/0033-295X.102.4.684.
Golman, Russell, David Hagmann, and George Loewenstein. 2017. “Information Avoidance.” Journal of Economic Literature 55 (1): 96–135. https://doi.org/10.1257/jel.20151245.
Golman, Russell, and George Loewenstein. 2015. “Curiosity, Information Gaps, and the Utility of Knowledge.” SSRN Scholarly Paper ID 2149362. Rochester, NY: Social Science Research Network. http://papers.ssrn.com/abstract=2149362.
Howell, Jennifer L., and James A. Shepperd. 2012. “Behavioral Obligation and Information Avoidance.” Annals of Behavioral Medicine 45 (2): 258–63. https://doi.org/10.1007/s12160-012-9451-9.
Kahneman, Daniel. 2003a. “A Psychological Perspective on Economics.” The American Economic Review 93 (2): 162–68. https://doi.org/10.1257/000282803321946985.
———. 2003b. “Maps of Bounded Rationality: Psychology for Behavioral Economics.” The American Economic Review 93 (5): 1449–75.
Kahneman, Daniel, and Amos Tversky. 1979. “Prospect Theory: An Analysis of Decision Under Risk.” Econometrica 47 (2): 263–92. http://www.nolle.se/wp-content/2011/11/Kahneman-Tversky-Prospect-theory79.pdf.
———. 1984. “Choices, Values, and Frames.” American Psychologist 39 (4): 341–50. https://doi.org/10.1037/0003-066X.39.4.341.
Köszegi, Botond, and Matthew Rabin. 2006. “A Model of Reference-Dependent Preferences.” Quarterly Journal of Economics 121 (4): 1133–65. https://doi.org/10.1162/qjec.121.4.1133.
Loomes, Graham, and Robert Sugden. 1982. “Regret Theory: An Alternative Theory of Rational Choice Under Uncertainty.” Economic Journal 92 (368): 805–24.
Peters, Ole. 2019. “The Ergodicity Problem in Economics.” Nature Physics 15 (12): 1216–21. https://doi.org/10.1038/s41567-019-0732-0.
Rabin, Matthew, and Richard H. Thaler. 2001. “Anomalies: Risk Aversion.” The Journal of Economic Perspectives 15 (1): 219–32. https://doi.org/10.2307/2696549.
Samuelson, Paul A. 1938. “A Note on the Pure Theory of Consumer’s Behaviour.” Economica 5 (17): 61–71. https://doi.org/10.2307/2548836.
Sedlmeier, Peter, and Gerd Gigerenzer. 2001. “Teaching Bayesian Reasoning in Less Than Two Hours.” Journal of Experimental Psychology: General 130 (3): 380–400.
Sharpe, Keiran. 2015. “On the Ellsberg Paradox and Its Extension by Machina.” SSRN Scholarly Paper ID 2630471. Rochester, NY: Social Science Research Network. http://papers.ssrn.com/abstract=2630471.
Simon, Herbert A. 1956. “Rational Choice and the Structure of the Environment.” Psychological Review 63 (2): 129–38. https://doi.org/10.1037/h0042769.
Simon, Herbert A. 1975. “Style in Design.” Spatial Synthesis in Computer-Aided Building Design 9: 287–309. http://edra.org/sites/default/files/publications/EDRA02-Simon-1-10_0.pdf.
———. 1958. “"The Decision-Making Schema": A Reply.” Public Administration Review 18 (1): 60–63. https://doi.org/10.2307/973736.
Sloman, Steven A., and Philip Fernbach. 2017. The Knowledge Illusion: Why We Never Think Alone. New York: Riverhead Books.
Tversky, A., and D. Kahneman. 1981. “The Framing of Decisions and the Psychology of Choice.” Science 211 (4481): 453–58. https://doi.org/10.1126/science.7455683.
Tversky, Amos, and Itamar Gati. 1982. “Similarity, Separability, and the Triangle Inequality.” Psychological Review 89 (2): 123–54. https://doi.org/10.1037/0033-295X.89.2.123.
Tversky, Amos, and Daniel Kahneman. 1973. “Availability: A Heuristic for Judging Frequency and Probability.” Cognitive Psychology 5 (2): 207–32. https://doi.org/10.1016/0010-0285(73)90033-9.
———. 1974. “Judgment Under Uncertainty: Heuristics and Biases.” Science 185 (4157): 1124–31. https://doi.org/10.1126/science.185.4157.1124.
Wolpert, Daniel M, R Chris Miall, and Mitsuo Kawato. 1998. “Internal Models in the Cerebellum.” Trends in Cognitive Sciences 2: 338–47. https://doi.org/10.1016/S1364-6613(98)01221-2.
Wolpert, David H. 2010. “Why Income Comparison Is Rational.” Games and Economic Behavior 69 (2): 458–74. https://doi.org/10.1016/j.geb.2009.12.001.