Bayes inference in an open world
Realizability, infrabayesianism, M-open, M-closed, mis-specification
2016-05-30 — 2026-01-14
Wherein the consequences of model misspecification are examined, M-open practices such as stacking and tempered Gibbs posteriors are considered, and infrabayesianism is presented via convex infradistributions.
It turns out that all models are wrong. We’re used to that, but rarely account for it in Bayesian inference. If my model is “wrong” in the sense that the ground truth is not in the hypothesis class, can I treat it “as if” it were correct, and still recover something like “nearly good” inference anyway? Does it really matter if I didn’t take into account that my model was misspecified?
The answer is: it depends. M-open (alongside M-closed and M-complete) is terminology for describing how our hypothesis class relates to reality, and for reasoning about how costly it is that our simplifications are imperfect. Infrabayesianism is a family of strategies for doing principled reasoning with over-simplified models.
Here’s where I take some notes on this.
1 M-open Bayes
Fancy folks write M-open as \(\mathcal{M}\)-open, but life’s too short for indulgent typography.
Le and Clarke (2017) summarises:
> For the sake of completeness, we recall that Bernardo and Smith (2000) define M-closed problems as those for which a true model can be identified and written down but is one amongst finitely many models from which an analyst has to choose. By contrast, M-complete problems are those in which a true model (sometimes called a belief model) exists but is inaccessible in the sense that even though it can be conceptualised it cannot be written down or at least cannot be used directly. Effectively this means that other surrogate models must be identified and used for inferential purposes. M-open problems according to Bernardo and Smith (2000) are those problems where a true model exists but cannot be specified at all.
They also mention Clyde and Iversen (2013) as a useful resource.
My understanding is as follows: In statistical modelling, we often operate under a convenient fiction: that somewhere within our set of candidate models lies the “true” process that generated our data. This is known as the M-closed setting. There is also M-complete somewhere in there, but this does not seem to be a popular category in practice, so I won’t disambiguate it here. But what happens when we acknowledge that our models are, at best, useful approximations of a reality far more complex than they can capture?
This brings us to the M-open setting, where we accept that the true data-generating process is fundamentally outside of our model class. This is, of course, the state of the world for most complex, real-world systems, because the map is not the territory, which is, surprisingly, not always acknowledged.
If no model is “true,” the goal of inference is no longer to identify that true model. Instead, we focus on predictive performance and robust decision-making under unavoidable model misspecification. The archetypal question changes from “Which model is right?” to “Which model is most useful, and how can we mitigate the risks of it being wrong?”
Related: likelihood principle, decision-theory, black swans, …
Bayesians have developed a pragmatic set of tools for navigating the M-open world, emphasizing practical performance over theoretical purity. The next few sections explore popular alternatives.
2 Ignore mis-specification
The default, and very popular in practice.
3 Stacking
The most common approach in early M-open applications is to use Bayesian model stacking.
If we can’t trust any single model, why not combine them in such a way that the models make up for each other’s deficiencies? Instead of traditional Bayesian model averaging, M-open practice favours stacking. Stacking uses cross-validation to find the optimal weights for combining multiple models into a single predictive distribution that performs best on out-of-sample data.
Specifically, we use leave-one-out cross-validation (LOO) and its efficient approximation via Pareto-smoothed importance sampling (PSIS-LOO). These methods provide a robust estimate of a model’s predictive accuracy, helping practitioners choose and combine models in a way that is explicitly geared for performance in the face of misspecification. The focus is on building a better predictive engine (Le and Clarke 2017), not on aspiring to find some imaginary ground truth in the model set.
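As a rough sketch rather than anything lifted from the cited papers (the function name `stacking_weights` and the softmax parameterisation are my own choices): given an array of pointwise leave-one-out log predictive densities, one column per candidate model, stacking finds the simplex weights that maximise the combined log score of the mixture.

```python
import numpy as np
from scipy.optimize import minimize

def stacking_weights(loo_lpd):
    """Stacking weights from an (n_obs, n_models) array of pointwise
    leave-one-out log predictive densities.

    Maximises sum_i log( sum_k w_k p(y_i | y_{-i}, M_k) ) over the simplex,
    via a softmax parameterisation of the weights.
    """
    n_models = loo_lpd.shape[1]
    # Subtracting the row-wise max is a constant shift per observation,
    # so it stabilises the exponentials without changing the optimum.
    dens = np.exp(loo_lpd - loo_lpd.max(axis=1, keepdims=True))

    def neg_log_score(w_free):
        w = np.exp(np.append(w_free, 0.0))
        w /= w.sum()
        return -np.log(dens @ w).sum()

    res = minimize(neg_log_score, np.zeros(n_models - 1), method="BFGS")
    w = np.exp(np.append(res.x, 0.0))
    return w / w.sum()
```

In practice the pointwise densities would come from PSIS-LOO applied to each fitted model; libraries such as ArviZ bundle the LOO computation and the stacking-weight optimisation, so a hand-rolled version like this is mostly for exposition.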
4 Generalized and Gibbs Posteriors
Standard Bayesian updating can behave poorly when the model is misspecified, sometimes becoming over-confidently wrong as more data comes in. Generalized Bayesian posteriors somewhat address this by “tempering” the likelihood with a learning rate parameter (η). This down-weights the influence of the likelihood, preventing the model from becoming too concentrated on a flawed representation of the world.
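Written out in the usual notation (nothing here is specific to the papers cited in this section): for data \(y_{1:n}\), prior \(\pi(\theta)\), and learning rate \(\eta \in (0, 1]\), the tempered posterior is

$$
\pi_\eta(\theta \mid y_{1:n}) \;\propto\; \pi(\theta)\,\prod_{i=1}^{n} p(y_i \mid \theta)^{\eta},
$$

which recovers ordinary Bayes at \(\eta = 1\) and flattens towards the prior as \(\eta \to 0\).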
This approach, also known as a Gibbs posterior, helps repair some of the statistical inconsistencies that arise under misspecification. Some methods, like SafeBayes (Thomas and Corander 2019), even learn the optimal tempering rate from the data itself, offering a more adaptive way to handle the mismatch between model and reality.
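In the Gibbs-posterior form, the log-likelihood is replaced by a generic loss \(\ell\), giving \(\pi_\eta(\theta \mid y_{1:n}) \propto \pi(\theta)\exp\bigl(-\eta \sum_{i=1}^{n} \ell(\theta, y_i)\bigr)\). To see what the learning rate does in the simplest possible case, here is a toy conjugate-normal sketch (the numbers and function name are mine, purely illustrative): tempering by \(\eta\) acts like shrinking the effective sample size from \(n\) to \(\eta n\), so the posterior stays wider.

```python
import numpy as np

def tempered_normal_posterior(y, sigma=1.0, mu0=0.0, tau0=10.0, eta=1.0):
    """Tempered posterior for the mean of a N(mu, sigma^2) model under a
    N(mu0, tau0^2) prior: raising the likelihood to the power eta is
    equivalent to shrinking the effective sample size from n to eta * n."""
    n = len(y)
    prec = 1.0 / tau0**2 + eta * n / sigma**2
    mean = (mu0 / tau0**2 + eta * np.sum(y) / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

rng = np.random.default_rng(0)
y = rng.normal(0.3, 1.0, size=200)
for eta in (1.0, 0.5, 0.1):
    m, s = tempered_normal_posterior(y, eta=eta)
    print(f"eta={eta:<4} posterior mean {m:.3f}, posterior sd {s:.3f}")
```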
My read on this approach is that it is a pragmatic robustification procedure, but not a terribly theoretically satisfying one.
5 Alternative Bayes foundations
Infrabayesianism, Maximin expected utility, and other approaches rebuild the foundations of reasoning to handle misspecification from the ground up. See [Imprecise Bayesianism](./bayes_imprecise.qmd).
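As a one-line caricature (my gloss, not a faithful statement of either framework): rather than committing to a single posterior, one carries a convex set \(\mathcal{C}\) of distributions and evaluates actions by their worst case,

$$
a^\star = \arg\max_{a} \;\min_{\mu \in \mathcal{C}} \; \mathbb{E}_{\mu}\!\left[ u(a, X) \right],
$$

which is the maximin-expected-utility rule; infrabayesianism builds its update and decision theory around such convex sets (infradistributions) rather than single measures.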
