M-open, M-complete, M-closed



Placeholder. I encountered this concept and thought it was nifty, but have not had time to do anything other than note it. M-open, M-complete, and M-closed describe different relations between your hypothesis class and reality: basically, is the true model in my hypothesis class (no, it never is, except with synthetic data), and if not, what does my estimation procedure get me?

Le and Clarke (2017) summarise:

For the sake of completeness, we recall that Bernardo and Smith (Bernardo and Smith 2000) define M-closed problems as those for which a true model can be identified and written down but is one amongst finitely many models from which an analyst has to choose. By contrast, M-complete problems are those in which a true model (sometimes called a belief model) exists but is inaccessible in the sense that even though it can be conceptualized it cannot be written down or at least cannot be used directly. Effectively this means that other surrogate models must be identified and used for inferential purposes. M-open problems according to Bernardo and Smith (2000) are those problems where a true model exists but cannot be specified at all.
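To make the distinction concrete, here is a minimal sketch (mine, not from any of the cited papers, using only the Python standard library). The "reality" is a two-component Gaussian mixture; the hypothesis class is a single Gaussian, so the truth is not in the class, a cartoon of the M-complete/M-open situation. The Gaussian MLE does not recover the truth; it converges to the KL-closest member of the class, which matches the mixture's overall mean and variance:

```python
import random
import statistics

random.seed(1)

# "Reality": an equal mixture of N(-2, 1) and N(+2, 1).
# Hypothesis class: a single Gaussian N(mu, sigma^2) -- misspecified,
# so no member of the class is the true data-generating process.
def sample_reality(n):
    return [random.gauss(-2.0 if random.random() < 0.5 else 2.0, 1.0)
            for _ in range(n)]

data = sample_reality(50_000)

# Gaussian MLE: sample mean and (biased) sample variance.
mu_hat = statistics.fmean(data)
var_hat = statistics.fmean((x - mu_hat) ** 2 for x in data)

# The KL projection of the mixture onto the Gaussian class has
# mean 0 and variance 1 + 2^2 = 5; the MLE approaches that,
# not the bimodal truth.
print(mu_hat, var_hat)
```

So "what does my estimation procedure get me?" here has a clean answer: a pseudo-true parameter, the best-in-class approximation, which is exactly the regime where stacking-style model combination (Yao et al. 2018; Le and Clarke 2017) is argued to behave better than Bayesian model averaging.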

References

Bernardo, José M., and Adrian F. M. Smith. 2000. Bayesian Theory. 1st edition. Chichester: Wiley.
Clarke, Bertrand. 2003. “Comparing Bayes Model Averaging and Stacking When Model Approximation Error Cannot Be Ignored.” The Journal of Machine Learning Research 4: 683–712.
Dellaporta, Charita, Jeremias Knoblauch, Theodoros Damoulas, and François-Xavier Briol. 2022. “Robust Bayesian Inference for Simulator-Based Models via the MMD Posterior Bootstrap.” arXiv:2202.04744 [Cs, Stat], February.
Le, Tri, and Bertrand Clarke. 2017. “A Bayes Interpretation of Stacking for \(\mathcal{M}\)-Complete and \(\mathcal{M}\)-Open Settings.” Bayesian Analysis 12 (3): 807–29.
Lyddon, Simon, Stephen Walker, and Chris Holmes. 2018. “Nonparametric Learning from Bayesian Models with Randomized Objective Functions.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2075–85. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Matsubara, Takuo, Jeremias Knoblauch, François-Xavier Briol, and Chris J. Oates. 2021. “Robust Generalised Bayesian Inference for Intractable Likelihoods.” arXiv:2104.07359 [Math, Stat], April.
Minka, Thomas P. 2002. “Bayesian Model Averaging Is Not Model Combination.”
Pacchiardi, Lorenzo, and Ritabrata Dutta. 2022. “Generalized Bayesian Likelihood-Free Inference Using Scoring Rules Estimators.” arXiv:2104.03889 [Stat], March.
Schmon, Sebastian M., Patrick W. Cannon, and Jeremias Knoblauch. 2021. “Generalized Posteriors in Approximate Bayesian Computation.” arXiv:2011.08644 [Stat], February.
Yao, Yuling, Aki Vehtari, Daniel Simpson, and Andrew Gelman. 2018. “Using Stacking to Average Bayesian Predictive Distributions.” Bayesian Analysis 13 (3): 917–1007.
