M-open, M-complete, M-closed

May 30, 2016 — July 23, 2023

Tags: Bayes, how do science, statistics

Placeholder. I encountered the M-open concept and thought it was useful to be able to state, but I have not had time to really delve into it. M-open, M-complete and M-closed describe different relations between our hypothesis class and reality: basically, do we have the true model in our hypothesis class (spoiler: no, we do not, except with synthetic data), and if not, what does our estimation procedure get us?

Fancy persons write this as \(\mathcal{M}\)-open etc., but life is too short for indulgent typography.

Le and Clarke (2017) summarise:

For the sake of completeness, we recall that Bernardo and Smith (2000) define M-closed problems as those for which a true model can be identified and written down but is one amongst finitely many models from which an analyst has to choose. By contrast, M-complete problems are those in which a true model (sometimes called a belief model) exists but is inaccessible in the sense that even though it can be conceptualized it cannot be written down or at least cannot be used directly. Effectively this means that other surrogate models must be identified and used for inferential purposes. M-open problems according to Bernardo and Smith (2000) are those problems where a true model exists but cannot be specified at all.
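To fix notation (my gloss, not theirs): write \(p^*\) for the true data-generating distribution and \(\{p_1, \dots, p_K\}\) for the candidate models. Then, roughly,

\[
\begin{aligned}
\text{M-closed:} &\quad p^* \in \{p_1, \dots, p_K\};\\
\text{M-complete:} &\quad p^* \notin \{p_1, \dots, p_K\}, \text{ but } p^* \text{ can be conceptualised, so surrogates can be scored against it};\\
\text{M-open:} &\quad p^* \notin \{p_1, \dots, p_K\}, \text{ and } p^* \text{ cannot be specified at all.}
\end{aligned}
\]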

They also mention Clyde and Iversen (2013) as a useful resource.

You will note that many of the references below take an interest in this concept because of its application to Bayesian model stacking.
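For concreteness, the stacking objective of Yao et al. (2018) (notation mine) chooses weights on the simplex \(\Delta^K\) to maximise a leave-one-out log score over the candidates' predictive densities,

\[
\hat{w} = \operatorname*{arg\,max}_{w \in \Delta^K} \sum_{i=1}^{n} \log \sum_{k=1}^{K} w_k \, p(y_i \mid y_{-i}, M_k),
\]

rather than weighting by posterior model probabilities as Bayesian model averaging does; the latter is only well-motivated in the M-closed setting, which is part of Minka's (2002) point.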

Related: likelihood principle, decision theory, black swans, misspecified models, robust Bayes

1 Gibbs posteriors

Gibbs posteriors seem to be one attempt to address the M-open problem: they remove the need for a valid likelihood. A sketch follows.
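Roughly, and as a sketch rather than a canonical definition: one replaces the negative log likelihood with a loss \(\ell(\theta, x)\) scaled by a learning rate \(\omega > 0\), giving

\[
\pi_n(\theta \mid x_{1:n}) \propto \pi(\theta) \exp\!\left( -\omega \sum_{i=1}^{n} \ell(\theta, x_i) \right).
\]

Taking \(\ell(\theta, x) = -\log p(x \mid \theta)\) and \(\omega = 1\) recovers the usual posterior, but nothing here requires believing that any \(p(\cdot \mid \theta)\) generated the data.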

2 References

Berger, Wolpert, Bayarri, et al. 1988. “The Likelihood Principle.” Lecture Notes-Monograph Series.
Bernardo, and Smith. 2000. Bayesian Theory.
Clarke. 2003. “Comparing Bayes Model Averaging and Stacking When Model Approximation Error Cannot Be Ignored.” The Journal of Machine Learning Research.
Clyde, and Iversen. 2013. “Bayesian Model Averaging in the M-Open Framework.” In Bayesian Theory and Applications.
Dellaporta, Knoblauch, Damoulas, et al. 2022. “Robust Bayesian Inference for Simulator-Based Models via the MMD Posterior Bootstrap.” arXiv:2202.04744 [Cs, Stat].
Jansen. n.d. “Robust Bayesian Inference Under Model Misspecification.”
Knoblauch, Jewson, and Damoulas. 2019. “Generalized Variational Inference: Three Arguments for Deriving New Posteriors.”
———. 2022. “An Optimization-Centric View on Bayes’ Rule: Reviewing and Generalizing Variational Inference.” Journal of Machine Learning Research.
Le, and Clarke. 2017. “A Bayes Interpretation of Stacking for M-Complete and M-Open Settings.” Bayesian Analysis.
Lyddon, Walker, and Holmes. 2018. “Nonparametric Learning from Bayesian Models with Randomized Objective Functions.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS’18.
Masegosa. 2020. “Learning Under Model Misspecification: Applications to Variational and Ensemble Methods.” In Proceedings of the 34th International Conference on Neural Information Processing Systems. NIPS’20.
Matsubara, Knoblauch, Briol, et al. 2022. “Robust Generalised Bayesian Inference for Intractable Likelihoods.” Journal of the Royal Statistical Society Series B: Statistical Methodology.
Minka. 2002. “Bayesian Model Averaging Is Not Model Combination.”
Pacchiardi, and Dutta. 2022. “Generalized Bayesian Likelihood-Free Inference Using Scoring Rules Estimators.” arXiv:2104.03889 [Stat].
Schmon, Cannon, and Knoblauch. 2021. “Generalized Posteriors in Approximate Bayesian Computation.” arXiv:2011.08644 [Stat].
Yao, Vehtari, Simpson, et al. 2018. “Using Stacking to Average Bayesian Predictive Distributions.” Bayesian Analysis.