A formalisation of Occam’s razor, most often invoked in model selection.
In the Bayesian context this typically means selecting the model that maximises the marginal likelihood (model evidence), which automatically penalises models that spread prior mass over flexibility the data do not require.
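A minimal sketch of that idea, not from any of the references below: two hypothetical models of Gaussian data with known noise scale `sigma`, one with no free parameters and one with a Gaussian prior over the mean. Both marginal likelihoods are available in closed form, so the Occam penalty paid by the more flexible model is visible directly. The specific models, priors, and parameter values are illustrative assumptions.

```python
# Sketch: Bayesian model selection by marginal likelihood (illustrative models).
#   M0: x_i ~ N(0, sigma^2)                          (no free parameters)
#   M1: mu ~ N(0, tau^2), x_i | mu ~ N(mu, sigma^2)  (one free parameter, integrated out)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, tau, n = 1.0, 2.0, 50
x = rng.normal(loc=0.0, scale=sigma, size=n)  # data actually generated by the simple model

# log marginal likelihood under M0: independent N(0, sigma^2) terms
log_ev_m0 = stats.norm.logpdf(x, loc=0.0, scale=sigma).sum()

# log marginal likelihood under M1: integrating out mu gives a multivariate
# normal with covariance sigma^2 * I + tau^2 * (ones ones^T)
cov_m1 = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
log_ev_m1 = stats.multivariate_normal.logpdf(x, mean=np.zeros(n), cov=cov_m1)

print(f"log evidence M0: {log_ev_m0:.2f}")
print(f"log evidence M1: {log_ev_m1:.2f}")
print(f"log Bayes factor (M0 vs M1): {log_ev_m0 - log_ev_m1:.2f}")
# With zero-mean data the log Bayes factor usually favours M0: the evidence
# penalises M1 for the prior flexibility it did not need. This is the sense in
# which maximising marginal likelihood acts as an automatic Occam's razor.
```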
References
Arora, Sanjeev, and Yi Zhang. 2021. “Rip van Winkle’s Razor: A Simple Estimate of Overfit to Test Data.” arXiv:2102.13189 [Cs, Stat], February.
Barron, A. R., and T. M. Cover. 1991. “Minimum Complexity Density Estimation.” IEEE Transactions on Information Theory 37 (4): 1034–54.
Barron, Andrew R. 1991. “Complexity Regularization with Application to Artificial Neural Networks.” In Nonparametric Functional Estimation and Related Topics, edited by George Roussas, 561–76. NATO ASI Series 335. Springer Netherlands.
Barron, A., J. Rissanen, and Bin Yu. 1998. “The Minimum Description Length Principle in Coding and Modeling.” IEEE Transactions on Information Theory 44 (6): 2743–60.
Collins, Michael, S. Dasgupta, and Robert E Schapire. 2001. “A Generalization of Principal Components Analysis to the Exponential Family.” In Advances in Neural Information Processing Systems. Vol. 14. MIT Press.
Grünwald, Peter. 1996. “A Minimum Description Length Approach to Grammar Inference.” In Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing, 1040:203–16. Lecture Notes in Computer Science. London, UK: Springer-Verlag.
Grünwald, Peter D. 2004. “A Tutorial Introduction to the Minimum Description Length Principle.” Advances in Minimum Description Length: Theory and Applications, June, 23–81.
———. 2007. The Minimum Description Length Principle. Cambridge, Mass.: MIT Press.
Hansen, Mark H., and Bin Yu. 2001. “Model Selection and the Principle of Minimum Description Length.” Journal of the American Statistical Association 96 (454): 746–74.
Li, Jun, and Dacheng Tao. 2013. “Simple Exponential Family PCA.” IEEE Transactions on Neural Networks and Learning Systems 24 (3): 485–97.
Legg, Shane. 2006. “Is There an Elegant Universal Theory of Prediction?” In Algorithmic Learning Theory, edited by José L. Balcázar, Philip M. Long, and Frank Stephan, 274–87. Lecture Notes in Computer Science 4264. Springer Berlin Heidelberg.
Mavromatis, Panayotis. 2009. “Minimum Description Length Modelling of Musical Structure.” Journal of Mathematics and Music 3 (3): 117–36.
Mohamed, Shakir, Zoubin Ghahramani, and Katherine A Heller. 2008. “Bayesian Exponential Family PCA.” In Advances in Neural Information Processing Systems. Vol. 21. Curran Associates, Inc.
Rissanen, J. 1984. “Universal Coding, Information, Prediction, and Estimation.” IEEE Transactions on Information Theory 30 (4): 629–36.
Roy, Nicholas, Geoffrey Gordon, and Sebastian Thrun. 2005. “Finding Approximate POMDP Solutions Through Belief Compression.” Journal of Artificial Intelligence Research 23 (1): 1–40.
Solomonoff, R. J. 1964a. “A Formal Theory of Inductive Inference. Part I.” Information and Control 7 (1): 1–22.
———. 1964b. “A Formal Theory of Inductive Inference. Part II.” Information and Control 7 (2): 224–54.
Sterkenburg, Tom F. 2016. “Solomonoff Prediction and Occam’s Razor.” Philosophy of Science 83 (4): 459–79.
Ullrich, K. 2020. “A Coding Perspective on Deep Latent Variable Models.”
Vitányi, Paul M. 2006. “Meaningful Information.” IEEE Transactions on Information Theory 52 (10): 4617–26.
Wojtowicz, Zachary, and Simon DeDeo. 2020. “From Probability to Consilience: How Explanatory Values Implement Bayesian Reasoning.” Trends in Cognitive Sciences 24 (12): 981–93.