M-open, M-complete, M-closed



Placeholder. I encountered the M-open concept and thought it was useful to be able to state, but I have not had time to really delve into it. M-open, M-complete and M-closed describe different relations between our hypothesis class and reality: briefly, does the hypothesis class contain the true model (spoiler: mine does not, except with synthetic data), and if not, what does our estimation procedure get us?
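A toy illustration of what estimation gets us outside the M-closed case (my own hypothetical example, not from any of the references below): if the data come from a Student-t but our hypothesis class contains only Gaussians, the Gaussian MLE does not converge to "the truth" (there is no truth in the class) but to the KL-closest Gaussian, which here moment-matches the t.

```python
import numpy as np

# M-open-ish toy: reality is a Student-t, the hypothesis class is Gaussian.
rng = np.random.default_rng(0)
nu = 5.0                               # degrees of freedom of the true model
data = rng.standard_t(nu, size=200_000)

mu_hat = data.mean()                   # Gaussian MLE for the location
sigma_hat = data.std()                 # Gaussian MLE for the scale

# The KL-closest Gaussian to a t_nu (nu > 2) has mean 0 and
# variance nu / (nu - 2), so the MLE should approach this scale.
sigma_star = np.sqrt(nu / (nu - 2))
print(mu_hat, sigma_hat, sigma_star)
```

The point being that the estimate is still well-behaved; it just answers a different question than "which model is true".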

Fancy persons write this as \(\mathcal{M}\)-open etc, but life is too short for indulgent typography.

Le and Clarke (2017) summarise:

For the sake of completeness, we recall that Bernardo and Smith (Bernardo and Smith 2000) define M-closed problems as those for which a true model can be identified and written down but is one amongst finitely many models from which an analyst has to choose. By contrast, M-complete problems are those in which a true model (sometimes called a belief model) exists but is inaccessible in the sense that even though it can be conceptualized it cannot be written down or at least cannot be used directly. Effectively this means that other surrogate models must be identified and used for inferential purposes. M-open problems according to Bernardo and Smith (2000) are those problems where a true model exists but cannot be specified at all.

They also mention Clyde and Iversen (2013) as a useful resource.

You will note that many of the references below engage with this concept because of its application to Bayesian model stacking.
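To see why stacking is natural in the M-open setting, here is a minimal sketch (my own, with made-up candidate models): neither candidate density is true, so rather than posterior model probabilities, which would concentrate on one wrong model, we weight the models by held-out predictive score.

```python
import numpy as np

def npdf(y, scale):
    """Density of a zero-mean Gaussian with the given scale."""
    return np.exp(-0.5 * (y / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
y = rng.standard_t(df=5, size=5_000)   # "reality": heavy-tailed

# Two wrong candidate predictive densities.
p1 = npdf(y, scale=1.0)                # too light-tailed
p2 = npdf(y, scale=2.0)                # too spread out

# Choose the stacking weight by maximising the log predictive score
# of the mixture over a grid on the simplex.
ws = np.linspace(0.0, 1.0, 101)
scores = [np.sum(np.log(w * p1 + (1 - w) * p2)) for w in ws]
w_best = ws[int(np.argmax(scores))]
print(w_best)
```

The optimal weight lands strictly inside (0, 1): the mixture beats either wrong model alone, which is the behaviour Bayesian model averaging famously fails to deliver asymptotically under misspecification.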

To mention: likelihood principle, decision-theory of, black swans, misspecified models

References

Bernardo, José M., and Adrian F. M. Smith. 2000. Bayesian Theory. 1st edition. Chichester: Wiley.
Clarke, Bertrand. 2003. "Comparing Bayes Model Averaging and Stacking When Model Approximation Error Cannot Be Ignored." The Journal of Machine Learning Research 4: 683–712.
Clyde, Merlise, and Edwin S. Iversen. 2013. "Bayesian Model Averaging in the M-Open Framework." In Bayesian Theory and Applications, edited by Paul Damien, Petros Dellaportas, Nicholas G. Polson, and David A. Stephens. Oxford University Press.
Dellaporta, Charita, Jeremias Knoblauch, Theodoros Damoulas, and François-Xavier Briol. 2022. "Robust Bayesian Inference for Simulator-Based Models via the MMD Posterior Bootstrap." arXiv:2202.04744 [cs, stat], February.
Le, Tri, and Bertrand Clarke. 2017. "A Bayes Interpretation of Stacking for M-Complete and M-Open Settings." Bayesian Analysis 12 (3): 807–29.
Lyddon, Simon, Stephen Walker, and Chris Holmes. 2018. "Nonparametric Learning from Bayesian Models with Randomized Objective Functions." In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2075–85. NIPS'18. Red Hook, NY, USA: Curran Associates Inc.
Masegosa, Andrés R. 2020. "Learning Under Model Misspecification: Applications to Variational and Ensemble Methods." In Proceedings of the 34th International Conference on Neural Information Processing Systems, 5479–91. NIPS'20. Red Hook, NY, USA: Curran Associates Inc.
Matsubara, Takuo, Jeremias Knoblauch, François-Xavier Briol, and Chris J. Oates. 2021. "Robust Generalised Bayesian Inference for Intractable Likelihoods." arXiv:2104.07359 [math, stat], April.
Minka, Thomas P. 2002. "Bayesian Model Averaging Is Not Model Combination."
Pacchiardi, Lorenzo, and Ritabrata Dutta. 2022. "Generalized Bayesian Likelihood-Free Inference Using Scoring Rules Estimators." arXiv:2104.03889 [stat], March.
Schmon, Sebastian M., Patrick W. Cannon, and Jeremias Knoblauch. 2021. "Generalized Posteriors in Approximate Bayesian Computation." arXiv:2011.08644 [stat], February.
Yao, Yuling, Aki Vehtari, Daniel Simpson, and Andrew Gelman. 2018. "Using Stacking to Average Bayesian Predictive Distributions." Bayesian Analysis 13 (3): 917–1007.
