Science for policy

Using evidence and reason to govern ourselves

August 7, 2011 — September 22, 2021

Tags: communicating, crisis, economics, ethics, how do science, mind, probability, risk, statistics, wonk

OK, so we can’t hope for predictions of the outcome of complex, large and unique things within the usual setup of controlled-trial scientific research. What can we hope for? What should we do with it?

1 Measuring results in wicked problems

Is hard. But we do it successfully sometimes, otherwise how would any government or business persist? See analytics.

Figure 2: First dog on the moon: If we let science and reason dictate policy we may as well surrender to the robots now

2 Stylized simulation

There are some suggestions in Bankes (2002) (a toy code sketch of the first two follows the list):

  • conceive and employ ensembles of alternatives rather than single models, policies, or scenarios;

  • use principles of invariance or robustness to devise recommended conclusions or policies;

  • use adaptive methods to meet the challenge of deep uncertainty; and

  • iteratively seek improved conclusions through interactive mechanisms that combine human and machine capabilities.
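
For concreteness, here is a minimal Python sketch of the ensemble-plus-robustness idea from the first two bullets. The policy names, scenario parameters and outcome function are all invented stand-ins for a real simulation model; only the shape of the exercise matters.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical policies and an ensemble of scenario parameters; in a real
# exercise each scenario would drive a full simulation model, not a toy formula.
policies = ["status_quo", "subsidy", "regulation"]
scenarios = [
    {"demand_growth": rng.uniform(-0.02, 0.05), "cost_shock": rng.uniform(0.0, 0.3)}
    for _ in range(200)
]

def outcome(policy: str, s: dict) -> float:
    """Toy welfare score of a policy under one scenario (invented numbers)."""
    base = 10 * s["demand_growth"] - s["cost_shock"]
    if policy == "subsidy":
        return base + 0.2 - 0.5 * s["cost_shock"]          # helps, but fragile to cost shocks
    if policy == "regulation":
        return base + 0.1 - 0.2 * abs(s["demand_growth"])  # modest, insensitive to scenarios
    return base                                            # status quo

# Robustness criterion: rank policies by a bad-case quantile across the whole
# ensemble, rather than by expected value under one favoured model.
for p in policies:
    scores = np.array([outcome(p, s) for s in scenarios])
    print(f"{p:>11}: mean={scores.mean():+.3f}  5th pct={np.percentile(scores, 5):+.3f}")
```

The design choice being illustrated: score every candidate policy against every member of the scenario ensemble and prefer the one whose bad cases are least bad, instead of optimising against a single model you happen to believe.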

Of course, most of these are ruled out by the modern policy cycle, are they not? When can you advise someone to adopt a policy of rapidly trying stuff out and adapting when it doesn’t work? Is there anything less compatible with the modern policy cycle than timescales which deviate from an electoral term and outcomes which are delivered as anything other than certain? When push comes to ballot, does any political rhetoric run on anything like statistically valid evidence, or is it all trial by anecdote and “common sense”?

Pffft, political pragmatism. Let’s think about what we might do.

3 Causation in policy

Causation can be made well-defined, possibly even for causality in social systems, but is it worth your time to do so in political discourse? Possibly a red herring, for the same reason touched upon above. When it comes to The World we rarely have access to big scalar “causal” levers to pull. Casually framed causal questions in policy, where they are meaningful at all, are often uninteresting, e.g.:

  • “Does pornography cause sexual violence?”
  • “Do needle exchanges increase the incidence of drug use?”

These kinds of questions are not even well posed, and even if they were defined enough to be answerable, they would be useless. More pertinent questions, of the sort I like to imagine we are actually considering (a toy simulation of the first is sketched after the list):

  • If we, e.g., ban certain forms of pornography, what effect might that have on sexual violence? How much? Over what time frame might we expect to see results to re-evaluate our policy? What other side effects would such a policy have? What other things influence sexual violence? What is the most efficient one to tackle?

  • If we legalise needle exchanges, what effect will that have on the total harm to society of drug usage? When? How about if we subsidise needle exchanges? Provide youth activities in high risk areas? How would these policies interact? etc.
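
To make the contrast concrete, here is a toy structural simulation in Python, with all variables, functional forms and numbers invented for illustration. It asks the interventional question directly, by running the system with and without the policy, rather than asking whether A “causes” B in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(ban: bool) -> np.ndarray:
    """Toy structural model: harm depends on several upstream factors,
    of which the banned exposure is only one. Entirely invented numbers."""
    deprivation = rng.normal(0, 1, n)                     # unmeasured social factors
    exposure = rng.binomial(1, 0.0 if ban else 0.3, n)    # the thing we could ban
    alcohol = rng.binomial(1, 0.2, n)                     # a competing influence
    harm_rate = (0.01 + 0.002 * exposure + 0.004 * alcohol
                 + 0.003 * np.clip(deprivation, 0, None))
    return rng.binomial(1, harm_rate)

harm_no_ban = simulate(ban=False).mean()
harm_ban = simulate(ban=True).mean()
print(f"harm rate without ban: {harm_no_ban:.4%}")
print(f"harm rate with ban:    {harm_ban:.4%}")
print(f"estimated effect of the intervention: {harm_ban - harm_no_ban:+.4%}")
```

In this made-up world the ban shifts the harm rate a little, and the other inputs matter more, which is exactly the kind of “how much, compared to what” answer the list above is asking for.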

I suppose this frames the use of science for policy as a kind of utilitarian stochastic calculus, which is both more and less than I think it could do, but it will serve as a first pass.

If propagating that idea is, in itself, enough to slightly lower the incidence of people on talk shows demanding of experts, or one another, “but does [loaded issue A] cause [ghastly consequence B]?” then my blood pressure will benefit.

Update: some of this is now formalised as external validity.

4 How to do science for policy


Still working on this one. Consider classic science (which is paradigmatically physics, for your average contemporary working philosopher of science). There we are greatly concerned with causation, by which we mean specific effects which will be reliably and quantifiably induced by specific perturbations, and we nut it out by setting up the same perturbations and observing the same effect, time after time after time. Then we try to come up with maximally general or elegant explanations for those results, then get Nobel prizes all round, heartily congratulate our colleagues and everyone lives happily ever after, eventually with jetpacks and benign post-human intelligences. And nanobots.

That’s what it seems like from over in social science, anyway.

For policy, things are fiddlier. Policy questions have huge numbers of interacting variables, highly contingent answers and are often one-offs. Sometimes many of the interacting variables can be reduced to simpler, more empirically sound sub-systems, and sometimes not. Moreover, living systems tend to be non-stationary, grossly non-ergodic and highly path dependent. What is the best we can do under such circumstances?
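
One standard way to see why non-ergodicity bites is the textbook multiplicative-growth example (not anything specific to this post): the ensemble average and the experience of a single trajectory can point in opposite directions, so averaging over hypothetical parallel societies tells you little about the path the one society you actually govern will take.

```python
import numpy as np

rng = np.random.default_rng(1)

# Multiplicative process: each step multiplies "wealth" by 1.5 or 0.6 with
# equal probability. The ensemble average grows (E[factor] = 1.05 per step),
# but almost every individual trajectory shrinks (E[log factor] < 0), so the
# time average of one path tells a different story from the ensemble average.
steps, paths = 100, 10_000
factors = rng.choice([1.5, 0.6], size=(paths, steps))
trajectories = np.cumprod(factors, axis=1)

print(f"theoretical ensemble mean after {steps} steps: {1.05 ** steps:.1f}")
print(f"median simulated path after {steps} steps:     {np.median(trajectories[:, -1]):.2e}")
print(f"share of paths that ended below where they started: "
      f"{(trajectories[:, -1] < 1).mean():.0%}")
```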

5 Designing systems that manage themselves

See mechanism design or perhaps community governance.

6 Post-normal science

What was this mess? Philosophy of science for existential risk.

Defined, pace Kuhn, by Funtowicz and Ravetz (1994) using a great many words that I shall roughly distil thus: science in the domain where we can no longer access a semblance of a large ensemble of experiments upon which to test our hypotheses. This is a problem for science as such, which does best when there are many observations to statistically smooth out the imperfections in our tests so that we can come to a description of some underlying dynamics. Or, at least, that’s what I was told in high school, fitting a slope to our stop-watch-timed weight drops to find a value for gravity.
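
That high-school exercise, reconstructed in a few lines of Python (drop heights and stopwatch noise invented for illustration): many noisy repetitions of the same experiment let a crude least-squares fit recover the underlying constant.

```python
import numpy as np

rng = np.random.default_rng(2023)
g_true = 9.81  # m/s^2, the "underlying dynamics" we pretend not to know

# Repeat the drop experiment many times from a few known heights, with
# stopwatch error on each timing (noise level is invented for illustration).
heights = np.repeat([1.0, 2.0, 3.0, 4.0], 50)                     # metres, 200 drops
true_times = np.sqrt(2 * heights / g_true)
measured_times = true_times + rng.normal(0, 0.05, heights.size)   # ~50 ms jitter

# h = (1/2) g t^2, so regress h on t^2 / 2; the fitted slope is our estimate of g.
slope, *_ = np.linalg.lstsq(
    (measured_times ** 2 / 2).reshape(-1, 1), heights, rcond=None
)
print(f"estimated g from 200 noisy drops: {slope[0]:.2f} m/s^2")
```

Post-normal problems are precisely the ones where we do not get 200 drops; we get one planet, run once.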

The prototypical example of a subject of post-normal science is the Earth. We can’t extrapolate in any meaningful way from a single data point, so lots of questions pertaining to its dynamics become difficult. Since Earth is the only planet with life we know of, how many others out there have life? Since this is the only industrial economy coupled to a planetary atmospheric system that we have known, how do we know if global warming will occur and mess us up royally? Must we throw up our hands in despair and fall back on rhetoric? e.g. Ravetz (1999) mentions cases of users arguing that climate change models are a Baudrillardian seduction. But ideally, methodologically, I mean, are there alternatives?

Yes.

The how-much-life-in-the-galaxy thing is not intractable — for example the speed with which life arose on our planet gives hints to the ease of life formation — Stunt-Bayesians love that kind of thing (Spiegel and Turner 2012).
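
For flavour, here is a crude one-parameter caricature of that style of argument in Python. This is not Spiegel and Turner’s actual model; the exponential-waiting-time likelihood, the timescales and the priors below are simplified stand-ins, chosen only to show the punchline that a single early data point leaves the posterior heavily dependent on the prior.

```python
import numpy as np

# Caricature: treat abiogenesis as a Poisson process with unknown rate lam
# (events per Gyr), use the observation that life appeared within ~0.8 Gyr of
# the planet becoming habitable, and condition on it appearing within ~3 Gyr
# at all (a stand-in for the anthropic selection effect). Numbers illustrative.
lam = np.logspace(-3, 3, 2000)                 # candidate rates, per Gyr
dlog = np.log(lam[1]) - np.log(lam[0])         # uniform spacing in log-space
t_emerge, t_window = 0.8, 3.0                  # Gyr

# P(life by t_emerge | lam, life by t_window) for an exponential waiting time.
likelihood = (1 - np.exp(-lam * t_emerge)) / (1 - np.exp(-lam * t_window))

for name, prior in [("uniform in lam", np.ones_like(lam)),
                    ("uniform in log(lam)", 1 / lam)]:
    post = likelihood * prior
    post /= np.sum(post * lam * dlog)          # normalise: integrate over lam via log grid
    p_rare = np.sum((post * lam * dlog)[lam < 0.1])
    print(f"prior {name:>19}: posterior P(lam < 0.1 per Gyr) = {p_rare:.2f}")
```

Run it and the two priors give very different posterior probabilities that life is rare, which is roughly the sobering point such analyses make: one data point buys you much less than rhetoric suggests.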

More practically, let us consider what Nassim Taleb has usefully branded with a metaphor about black swans.

7 Planning under uncertainty

TBC. Nassim Taleb has a whole career based on handling heavy-tailed risk and managing out-of-sample downsides (Taleb 2007, 2020). This subsection needs a better name and a notebook of its own. Contrariness dictates I will not use Taleb’s terms.
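
A minimal illustration of the problem, using the standard Pareto-versus-Gaussian comparison rather than anything from Taleb’s papers specifically: with a heavy-tailed loss distribution, a modest historical sample flatters you, and the worst observation over a longer horizon dwarfs anything in the record.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two loss distributions with the same theoretical mean: a thin-tailed Gaussian
# and a heavy-tailed Pareto with tail index 1.2 (finite mean, infinite variance).
# Compare what a modest historical sample suggests with what a longer run delivers.
n_hist, n_future = 250, 100_000
alpha = 1.2
pareto_mean = alpha / (alpha - 1)                               # = 6 for alpha = 1.2

gauss_hist = rng.normal(pareto_mean, 1.0, n_hist)
pareto_hist = (1 - rng.uniform(size=n_hist)) ** (-1 / alpha)    # Pareto(1, alpha) via inverse CDF
pareto_future = (1 - rng.uniform(size=n_future)) ** (-1 / alpha)

for name, hist in [("gaussian", gauss_hist), ("pareto  ", pareto_hist)]:
    print(f"{name}: historical mean {hist.mean():6.2f}, historical max {hist.max():8.1f}")
print(f"pareto  : largest loss over a longer horizon {pareto_future.max():10.1f}")
```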

8 Incoming

9 References

Bankes. 2002. “Tools and Techniques for Developing Policies for Complex and Uncertain Systems.” Proceedings of the National Academy of Sciences.
Cirillo, and Taleb. 2020. “Tail Risk of Contagious Diseases.” Nature Physics.
Dawes, Faust, and Meehl. 1989. “Clinical Versus Actuarial Judgment.” Science.
Freedman, and Stark. 2009. “What Is the Chance of an Earthquake?” In Statistical Models and Causal Inference: A Dialogue with the Social Sciences.
Funtowicz, and Ravetz. 1994. “The Worth of a Songbird: Ecological Economics as a Post-Normal Science.” Ecological Economics.
Grosz, Rohrer, and Thoemmes. 2020. “The Taboo Against Explicit Causal Inference in Nonexperimental Psychology.” Perspectives on Psychological Science.
Heck, Chabris, Watts, et al. 2020. “Objecting to Experiments Even While Approving of the Policies or Treatments They Compare.” Proceedings of the National Academy of Sciences.
Kerkhoff. 1996. “Through the Looking Glass: The Role and Analysis of Metaphorical Language in Interdisciplinary Science.”
Midgley. 2001. Systemic Intervention: Philosophy, Methodology and Practice (Contemporary Systems Thinking).
Ravetz. 1999. “Models as Metaphors: A New Look at Science.” Urban Lifestyles, Sustainability and Integrated Environmental Assessment (Ulysses) Working Paper WP-99-3. Ulysses Project.
Reyna. 2021. “A Scientific Theory of Gist Communication and Misinformation Resistance, with Implications for Health, Education, and Policy.” Proceedings of the National Academy of Sciences.
Sloman, and Fernbach. 2017. The Knowledge Illusion: Why We Never Think Alone.
Spiegel, and Turner. 2012. “Bayesian Analysis of the Astrobiological Implications of Life’s Early Emergence on Earth.” Proceedings of the National Academy of Sciences.
Taleb. 2007. “Black Swans and the Domains of Statistics.” The American Statistician.
———. 2020. “On the Statistical Differences Between Binary Forecasts and Real-World Payoffs.” International Journal of Forecasting.
Thagard, and Zhu. 2003. “Acupuncture, Incommensurability, and Conceptual Change.” Intentional Conceptual Change.