You want to use some tasty tool, such as a hierarchical model, without anyone getting cross at you for apostasy by doing it in the wrong discipline? Why not use *whatever* estimator works, and then show that it works on both frequentist and Bayesian grounds?

There is a basic result, due to Doob, which essentially says that the Bayesian learner is consistent, except on a set of data of prior probability zero. That is, the Bayesian is subjectively certain they will converge on the truth. This is not as reassuring as one might wish, and showing Bayesian consistency under the true distribution is harder. In fact, it usually involves assumptions under which non-Bayes procedures will also converge. […]
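To make the flavour of posterior concentration concrete, here is a minimal sketch (my toy example, not from any of the cited papers): conjugate Beta-Bernoulli updating, where the posterior piles up around the true parameter as data accumulate — the benign case, before any of the pathologies above kick in.

```python
import random

random.seed(0)

# Toy posterior-concentration check: Bernoulli data with true parameter
# theta_true, uniform Beta(1, 1) prior, conjugate update. As n grows the
# posterior mean approaches the truth and the posterior variance shrinks.
theta_true = 0.3
a, b = 1.0, 1.0  # Beta(1, 1) prior

n = 10_000
heads = sum(random.random() < theta_true for _ in range(n))
a_post, b_post = a + heads, b + n - heads

post_mean = a_post / (a_post + b_post)
post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))

print(post_mean, post_var)
```

Here the prior puts positive mass around the truth, so Doob-style consistency is unsurprising; the interesting failures arise when the model is misspecified or infinite-dimensional.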

Concentration of the posterior around the truth is only a preliminary. One would also want to know that, say, the posterior mean converges, or, better yet, that the predictive distribution converges. For many finite-dimensional problems, what's called the "Bernstein–von Mises theorem" basically says that the posterior mean and the maximum likelihood estimate converge to one another, so if one works, the other will too. This breaks down for infinite-dimensional problems.
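A toy instance of that finite-dimensional picture (my sketch, under assumptions I am choosing here): for Bernoulli data with a Beta(a, b) prior, the MLE is `heads/n` and the posterior mean is `(heads + a)/(n + a + b)`, so their gap is O(1/n) regardless of the data.

```python
import random

random.seed(1)

# Gap between the MLE and the posterior mean for Bernoulli data with a
# Beta(a, b) prior. Algebraically the gap is at most 2/(n + a + b) here,
# a caricature of the Bernstein-von Mises phenomenon: in regular
# finite-dimensional models the two estimators are asymptotically
# equivalent.
theta_true, a, b = 0.6, 2.0, 2.0

def gap(n):
    heads = sum(random.random() < theta_true for _ in range(n))
    mle = heads / n
    post_mean = (heads + a) / (n + a + b)
    return abs(mle - post_mean)

gaps = [gap(n) for n in (10, 100, 1000, 10_000)]
print(gaps)
```

The prior's influence washes out at rate 1/n; in infinite-dimensional models no such uniform statement is available, which is where the trouble starts.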

Bernardo (2006), in the context of "objective Bayes", argues for frequentist methods as necessary:

> Bayesian Statistics is typically taught, if at all, after a prior exposure to frequentist statistics. It is argued that it may be appropriate to reverse this procedure. Indeed, the emergence of powerful objective Bayesian methods (where the result, as in frequentist statistics, only depends on the assumed model and the observed data), provides a new unifying perspective on most established methods, and may be used in situations (e.g. hierarchical structures) where frequentist methods cannot. On the other hand, frequentist procedures provide mechanisms to evaluate and calibrate any procedure. Hence, it may be the right time to consider an integrated approach to mathematical statistics, where objective Bayesian methods are first used to provide the building elements, and frequentist methods are then used to provide the necessary evaluation.
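One way to read "frequentist methods provide the necessary evaluation" is as a coverage check: simulate repeatedly under a fixed truth, build a Bayesian credible interval each time, and count how often it covers. A sketch (my example; the 95% interval uses a normal approximation to the Beta posterior rather than exact quantiles):

```python
import random
from statistics import NormalDist

random.seed(2)

# Frequentist calibration of a Bayesian procedure: repeated-sampling
# coverage of an approximate 95% credible interval for a Bernoulli
# parameter, using posterior mean +/- z * posterior sd from the
# conjugate Beta posterior.
theta_true, a, b = 0.4, 1.0, 1.0
z = NormalDist().inv_cdf(0.975)

def covers(n=200):
    heads = sum(random.random() < theta_true for _ in range(n))
    a_p, b_p = a + heads, b + n - heads
    mean = a_p / (a_p + b_p)
    sd = (a_p * b_p / ((a_p + b_p) ** 2 * (a_p + b_p + 1))) ** 0.5
    return mean - z * sd <= theta_true <= mean + z * sd

coverage = sum(covers() for _ in range(2000)) / 2000
print(coverage)
```

In this well-specified conjugate setting the credible interval is also approximately calibrated in the frequentist sense; the evaluation step earns its keep precisely when that agreement is not automatic.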

## Nonparametric

Bayesian nonparametrics sounds like it might avoid the problem of failing to include the true model, but it can also fail in weird ways.

## Variational

I am not sure how this works. But it is important (Wang and Blei 2017).

## References

*Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing*, 634. ACM Press.

*Advances in Neural Information Processing Systems*.

*The Annals of Statistics* 4 (6): 1236–39.

*Statistical Science* 19 (1): 58–80.

*The Annals of Statistics* 21 (2): 903–23.

*The Annals of Statistics* 14 (1): 1–26.

*Le Calcul des Probabilités et ses Applications*, 23–27. Colloques Internationaux du Centre National de la Recherche Scientifique, No. 13. Centre National de la Recherche Scientifique, Paris.

*The Annals of Applied Statistics* 6 (4): 1971–97.

*Journal of the Royal Statistical Society: Series B (Statistical Methodology)* 77 (3): 617–46.

*Econometric Theory* 32 (1): 71–121.

*arXiv:1905.08737 [stat]*, May.

*The Annals of Statistics* 27 (4): 1119–41.

*Bayesian Analysis* 3 (3).

*The Annals of Applied Statistics* 2 (4): 1360–83.

*The Annals of Statistics* 49 (1): 182–202.

*The Annals of Statistics* 34 (2): 837–77.

*The Annals of Statistics* 39 (5).

*Ecology Letters* 10 (7): 551.

*Journal of the American Statistical Association* 105 (492): 1617–25.

*arXiv:1410.7600 [math, stat]*, October.

*The American Statistician* 38 (2): 135–36.

*Annual Review of Statistics and Its Application* 3 (1): 211–31.

*Electronic Journal of Statistics* 3: 1039–74.

*Unpublished Chapter, Department of Economics, Princeton University*.

*arXiv:1310.4489 [math, stat]*, October.

*Journal of the Royal Statistical Society. Series B (Methodological)* 58 (1): 267–88.

*Journal of Ornithology* 152 (2): 393–408.

*arXiv:1705.03439 [cs, math, stat]*, May.

*Statistical Science* 26 (3): 322–25.
