You want to use some tasty tool, such as a […], without anyone getting cross at you for committing apostasy by doing it in the wrong discipline?
Why not use whatever estimator works, and then show that it works on both frequentist and Bayesian grounds?
There is a basic result, due to Doob, which essentially says that the
Bayesian learner is consistent, except on a set of data of prior
probability zero.
That is, the Bayesian is subjectively certain they will converge on the truth.
This is not as reassuring as one might wish, and showing Bayesian
consistency under the true distribution is harder.
In fact, it usually involves assumptions under which non-Bayes procedures
will also converge. […]
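A toy illustration of the kind of convergence at stake (not Doob's theorem itself, which is a prior-almost-sure statement): with a uniform prior over a small grid of candidate Bernoulli parameters, the posterior piles its mass on the truth as data accumulate. The grid, the uniform prior, and the deterministic 30%-success data are all arbitrary choices of this sketch.

```python
import math

# Toy posterior concentration: Bernoulli data, uniform prior on three
# candidate success probabilities. As n grows, the posterior mass
# concentrates on the true parameter (here p = 0.3).

CANDIDATES = [0.2, 0.3, 0.4]  # hypothetical parameter grid for this sketch

def posterior(n, k):
    """Posterior over CANDIDATES after k successes in n trials, uniform prior."""
    log_post = [k * math.log(p) + (n - k) * math.log(1 - p) for p in CANDIDATES]
    m = max(log_post)                      # log-sum-exp for numerical stability
    w = [math.exp(lp - m) for lp in log_post]
    z = sum(w)
    return [x / z for x in w]

# Deterministic data: exactly 30% successes, so the truth is p = 0.3.
for n in (10, 100, 1000):
    print(n, posterior(n, round(0.3 * n)))
```

With n = 10 the posterior is still spread over all three candidates; by n = 1000 essentially all its mass sits on 0.3.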
Concentration of the posterior around the truth is only a preliminary.
One would also want to know that, say, the posterior mean converges, or
even better that the predictive distribution converges.
For many finite-dimensional problems, what’s called the “Bernstein–von
Mises theorem” basically says that the posterior mean and the maximum
likelihood estimate converge to each other, so if one works, the other will too.
This breaks down for infinite-dimensional problems.
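In the conjugate Beta-Bernoulli model this merging can be checked by hand: under a Beta(a, b) prior the posterior mean after k successes in n trials is (a + k)/(a + b + n), while the MLE is k/n, so the two differ by O(1/n). A minimal sketch, in which the Beta(2, 5) prior and the truth p = 0.3 are arbitrary choices:

```python
# Bernstein-von Mises in miniature: in the Beta-Bernoulli model the
# posterior mean and the MLE differ by O(1/n), so they merge as n grows.

def posterior_mean(k, n, a=2.0, b=5.0):
    """Posterior mean of p under a Beta(a, b) prior, after k successes in n trials."""
    return (a + k) / (a + b + n)

def mle(k, n):
    """Maximum likelihood estimate of p: the sample frequency."""
    return k / n

def gap(n, p=0.3):
    """|posterior mean - MLE| with a deterministic k = round(p * n) successes."""
    k = round(p * n)
    return abs(posterior_mean(k, n) - mle(k, n))

for n in (10, 100, 1000, 10000):
    print(n, gap(n))
```

Here the gap works out to 0.1/(n + 7) exactly, so it shrinks at the 1/n rate; the point of the infinite-dimensional counterexamples is that no such merging need happen there.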
Bernardo (2006), writing in the context of “objective Bayes”, argues that frequentist methods are nonetheless necessary:
“Bayesian Statistics is typically taught, if at all, after a prior exposure to
frequentist statistics. It is argued that it may be appropriate to reverse
this procedure. Indeed, the emergence of powerful objective Bayesian methods
(where the result, as in frequentist statistics, only depends on the assumed
model and the observed data), provides a new unifying perspective on most
established methods, and may be used in situations (e.g. hierarchical
structures) where frequentist methods cannot. On the other hand, frequentist
procedures provide mechanisms to evaluate and calibrate any procedure. Hence,
it may be the right time to consider an integrated approach to mathematical
statistics, where objective Bayesian methods are first used to provide the
building elements, and frequentist methods are then used to provide the […]”
Aaronson, Scott. 2005. “The Complexity of Agreement.” In Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, 634–43. ACM Press.
Advani, Madhu, and Surya Ganguli. 2016. “An Equivalence Between High Dimensional Bayes Optimal Inference and M-Estimation.” In Advances in Neural Information Processing Systems 29.
Aumann, Robert J. 1976. “Agreeing to Disagree.” The Annals of Statistics 4 (6): 1236–39.
Bayarri, M. J., and J. O. Berger. 2004. “The Interplay of Bayesian and Frequentist Analysis.” Statistical Science 19 (1): 58–80.
Bernardo, José M. 2006. “A Bayesian Mathematical Statistics Primer.” Universitat de València.
Cox, Dennis D. 1993. “An Analysis of Bayesian Inference for Nonparametric Regression.” The Annals of Statistics 21 (2): 903–23.
Diaconis, Persi, and David Freedman. 1986. “On the Consistency of Bayes Estimates.” The Annals of Statistics 14 (1): 1–26.
Doob, J. L. 1949. “Application of the Theory of Martingales.” In Le Calcul des Probabilités et ses Applications, 23–27. Colloques Internationaux du Centre National de la Recherche Scientifique, No. 13. Paris: Centre National de la Recherche Scientifique.
Efron, Bradley. 2012. “Bayesian Inference and the Parametric Bootstrap.” The Annals of Applied Statistics 6 (4): 1971–97.
———. 2015. “Frequentist Accuracy of Bayesian Estimates.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 77 (3): 617–46.
Florens, Jean-Pierre, and Anna Simoni. 2016. “Regularizing Priors for Linear Inverse Problems.” Econometric Theory 32 (1): 71–121.
Fong, Edwin, and Chris Holmes. 2019. “On the Marginal Likelihood and Cross-Validation.” arXiv:1905.08737 [stat].
Gelman, Andrew. 2008. “Rejoinder.” Bayesian Analysis.
Gelman, Andrew, Aleks Jakulin, Maria Grazia Pittau, and Yu-Sung Su. 2008. “A Weakly Informative Default Prior Distribution for Logistic and Other Regression Models.” The Annals of Applied Statistics 2 (4): 1360–83.
Kleijn, B. J. K. 2021. “Frequentist Validity of Bayesian Limits.” The Annals of Statistics 49 (1): 182–202.
Kleijn, B. J. K., and A. W. van der Vaart. 2006. “Misspecification in Infinite-Dimensional Bayesian Statistics.” The Annals of Statistics 34 (2): 837–77.
Knapik, B. T., A. W. van der Vaart, and J. H. van Zanten. 2011. “Bayesian Inverse Problems with Gaussian Priors.” The Annals of Statistics 39 (5): 2626–57.
Lele, Subhash R., Khurram Nadeem, and Byron Schmuland. 2010. “Estimability and Likelihood Inference for Generalized Linear Mixed Models Using Data Cloning.” Journal of the American Statistical Association 105 (492): 1617–25.
Rousseau, Judith. 2016. “On the Frequentist Properties of Bayesian Nonparametric Methods.” Annual Review of Statistics and Its Application 3 (1): 211–31.
Shalizi, Cosma Rohilla. 2009. “Dynamics of Bayesian Updating with Dependent Data and Misspecified Models.” Electronic Journal of Statistics 3: 1039–74.
Sims, C. 2010. “Understanding Non-Bayesians.” Unpublished chapter, Department of Economics, Princeton University.
Szabó, Botond, Aad van der Vaart, and Harry van Zanten. 2013. “Frequentist Coverage of Adaptive Nonparametric Bayesian Credible Sets.” arXiv:1310.4489 [math, stat].
Tibshirani, Robert. 1996. “Regression Shrinkage and Selection via the Lasso.” Journal of the Royal Statistical Society. Series B (Methodological) 58 (1): 267–88.
Wang, Yixin, and David M. Blei. 2017. “Frequentist Consistency of Variational Bayes.” arXiv:1705.03439 [cs, math, stat].
Wasserman, Larry. 2011. “Frasian Inference.” Statistical Science 26 (3): 322–25.