One of the practical forms of Bayesian inference for massively parameterised networks is model averaging.

## Explicit ensembles

Train a collection of networks and use the empirical mean and variance of their predictions to estimate the posterior predictive mean and variance
(He, Lakshminarayanan, and Teh 2020; Huang et al. 2016; Lakshminarayanan, Pritzel, and Blundell 2017; Wen, Tran, and Ba 2020; Xie, Xu, and Chuang 2013).
This is neat, and on one hand we might think there is nothing special to do here, since it's already more or less classical model ensembling, as near as I can tell.
But in practice there are lots of tricks needed to make this work in a neural network context, in particular because a single model is already supposed to be so big that it strains the GPU; keeping *many* such models around is presumably ridiculous.
You need tricks, and there are various such tricks. BatchEnsemble (Wen, Tran, and Ba 2020) is one.
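
For concreteness, here is a minimal sketch of the vanilla recipe in PyTorch, glossing over all of those tricks; the architecture, training loop, and hyperparameters are illustrative choices of mine, not from any particular paper.

```python
# Deep-ensemble sketch: train M independently initialised networks and use
# the empirical mean/variance of their predictions as a rough posterior
# predictive. Illustrative only; real implementations add many tricks.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

def train(net, x, y, steps=500, lr=1e-2):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()
    return net

# Toy data: y = sin(x) + noise.
x = torch.linspace(-3, 3, 100).unsqueeze(-1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

# Members differ only in their random initialisation (and possibly data
# ordering); that alone induces useful predictive spread.
ensemble = [train(make_net(), x, y) for _ in range(5)]

x_test = torch.linspace(-5, 5, 200).unsqueeze(-1)
with torch.no_grad():
    preds = torch.stack([net(x_test) for net in ensemble])  # (M, N, 1)
mean = preds.mean(dim=0)  # empirical posterior-predictive mean
var = preds.var(dim=0)    # member disagreement as predictive uncertainty
```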

Cute: Justin Domke, in *The human regression ensemble*, draws ensembles of curves by hand through datapoints on a PDF and gets pretty good results.

## Dropout

Dropout is an *implicit* ensembling method.
Or maybe *the* implicit ensembling method; I am not aware of others.
Recommended reading: Foong et al. (2019); Gal, Hron, and Kendall (2017); Kingma, Salimans, and Welling (2015).

A popular kind of noise layer which randomly zeroes out some coefficients in the net during training (and optionally while predicting).
A coarse resemblance to random forests etc. is pretty immediate, and indeed you can just use those instead.
Here, however, we are trying to average over *strong* learners, not weak learners.

The key insight here is that dropout can apparently be rationalised as model averaging, and thence as a kind of implicit probabilistic learning, because in the limit it approaches a certain deep Gaussian process (Kingma, Salimans, and Welling 2015; Gal and Ghahramani 2016b, 2015). Leveraging this argument, some papers claim to approximate Bayesian inference by randomising dropout (M. Kasim et al. 2019; M. F. Kasim et al. 2020).
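
Operationally, the resulting *MC dropout* recipe is easy to state: leave dropout switched on at prediction time and treat repeated stochastic forward passes as samples from an approximate posterior predictive. A minimal sketch, assuming a PyTorch model with no other train-mode-only layers (e.g. no batch norm); `mc_predict` is my name for it, not anything from the papers:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Dropout(p=0.1),  # kept active at prediction time below
    nn.Linear(64, 1),
)
# ... train `net` as usual here ...

def mc_predict(net, x, n_samples=100):
    net.train()  # keeps Dropout stochastic; safe here as there is no BatchNorm
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

mean, var = mc_predict(net, torch.linspace(-3, 3, 50).unsqueeze(-1))
```

The theory ties the dropout rate and weight decay to the implied prior, so the resulting uncertainties are only as calibrated as those choices.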

AFAICT the current consensus is that the highly cited and very simple model of Gal and Ghahramani (2015) is flawed, and that the rather more onerous approach of Kingma, Salimans, and Welling (2015) is how you would use dropout to obtain a more reasonable posterior. So much was said in a seminar, but I have not really used either paper in practice, so I cannot comment.

## Alternate model combinations

Should we stop weighting hypotheses and start "stacking" (Yao et al. 2018)? (Also, how is that different?) Quoting the paper:

> The widely recommended procedure of Bayesian model averaging is flawed in the M-open setting in which the true data-generating process is not one of the candidate models being fit. We take the idea of stacking from the point estimation literature and generalize to the combination of predictive distributions, extending the utility function to any proper scoring rule, using Pareto smoothed importance sampling to efficiently compute the required leave-one-out posterior distributions and regularization to get more stability.
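
The weight-fitting step, at least, fits in a few lines. A minimal sketch, with plain leave-one-out predictive densities standing in for the paper's Pareto smoothed importance sampling; `loo_density` and `stacking_weights` are illustrative names of mine:

```python
# Stacking of predictive distributions: choose simplex weights maximising
# the log score of the mixture of leave-one-out predictive densities.
import numpy as np
from scipy.optimize import minimize

def stacking_weights(loo_density):
    """loo_density[i, k] = model k's LOO predictive density at point i."""
    _, k = loo_density.shape

    def neg_log_score(z):
        w = np.exp(z - z.max())
        w /= w.sum()  # softmax keeps the weights on the simplex
        return -np.log(loo_density @ w).sum()

    res = minimize(neg_log_score, np.zeros(k), method="Nelder-Mead")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()

# e.g. 3 candidate models scored at 100 held-out points:
w = stacking_weights(np.random.rand(100, 3) + 0.1)
```

Unlike Bayesian model averaging, these weights need not concentrate on a single model as data accumulates; they are chosen for the predictive performance of the *mixture*.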

## Distilling

So apparently you can train a single model to emulate an ensemble of similar models?
Great terminology here: Hinton, Vinyals, and Dean (2015) refer to the *distilling* of *dark knowledge*.

See Bubeck on this: *Three mysteries in deep learning: Ensemble, knowledge distillation, and self-distillation*.
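
The mechanics of the loss are at least simple to sketch. Assuming classifier networks that output raw logits (the function name and temperature here are my own choices):

```python
# Distillation loss in the style of Hinton, Vinyals, and Dean (2015): the
# student matches the temperature-softened average prediction of the ensemble.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, T=4.0):
    # Average the teachers' softened probabilities: the "dark knowledge".
    soft_targets = torch.stack(
        [F.softmax(t / T, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across
    # temperatures, as recommended in the paper.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * T**2
```

In practice this is usually mixed with the ordinary cross-entropy on the true labels.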

## Via NTK

How does this work? See He, Lakshminarayanan, and Teh (2020), who obtain Bayesian deep ensembles via the neural tangent kernel.

## Questions

These methods generally target the posterior *predictive*.
How do I find posteriors over the parameter values in my model without including them explicitly in my predictive loss?
If many of my parameters are not interpretable, I am naturally tempted to fit some by maximum likelihood, take them as given, and then update posteriors over the remainder, but this does not look like a principled inference procedure.

## References

*arXiv:2110.11216 [Cs, Math, Stat]*, October.

*Mathematics of Computation* 91 (335): 1247–80.

*The Journal of Machine Learning Research* 4: 683–712.

*arXiv:2012.07244 [Cs]*, March.

*arXiv:2106.14806 [Cs, Stat]*.

Foong, Andrew Y. K., David R. Burt, Yingzhen Li, and Richard E. Turner. 2019. "Pathologies of Factorised Gaussian and MC Dropout Posteriors in Bayesian Neural Networks." In *4th Workshop on Bayesian Deep Learning (NeurIPS 2019)*.

Gal, Yarin, and Zoubin Ghahramani. 2016. "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning." In *Proceedings of the 33rd International Conference on Machine Learning (ICML-16)*.

Gal, Yarin, and Zoubin Ghahramani. 2016. "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks." *arXiv:1512.05287 [Stat]*.

Gal, Yarin, and Zoubin Ghahramani. 2016. "Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference." In *4th International Conference on Learning Representations (ICLR) Workshop Track*.

Gal, Yarin, and Zoubin Ghahramani. 2015. "Dropout as a Bayesian Approximation: Appendix." *arXiv:1506.02157 [Stat]*, May.

Gal, Yarin, Jiri Hron, and Alex Kendall. 2017. "Concrete Dropout." *arXiv:1705.07832 [Stat]*, May.

*arXiv:1805.08034 [Cs, Math]*, May.

He, Bobby, Balaji Lakshminarayanan, and Yee Whye Teh. 2020. "Bayesian Deep Ensembles via the Neural Tangent Kernel." In *Advances in Neural Information Processing Systems*. Vol. 33.

Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. 2015. "Distilling the Knowledge in a Neural Network." *arXiv:1503.02531 [Cs, Stat]*, March.

Huang, Gao, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. 2016. "Deep Networks with Stochastic Depth." In *Computer Vision – ECCV 2016*, edited by Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, 646–61. Lecture Notes in Computer Science. Cham: Springer International Publishing.

Kasim, M. F., et al. 2020. "Up to Two Billion Times Acceleration of Scientific Simulations with Deep Neural Architecture Search." *arXiv:2001.08055 [Physics, Stat]*, January.

Kingma, Durk P., Tim Salimans, and Max Welling. 2015. "Variational Dropout and the Local Reparameterization Trick." In *Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2*, 2575–83. NIPS'15. Cambridge, MA, USA: MIT Press.

*Inverse Problems* 35 (9): 095005.

Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. "Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles." In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, 6405–16. NIPS'17. Red Hook, NY, USA: Curran Associates Inc.

*Bayesian Analysis* 12 (3): 807–29.

*JMLR*, April.

*Proceedings of ICML*.

*IEEE Transactions on Neural Networks* 12 (6): 1278–87.

*Third Workshop on Bayesian Deep Learning (NeurIPS 2018), Montréal, Canada*.

*arXiv:2105.14594 [Cs, Stat]*, May.

*arXiv:2012.01988 [Cs]*, October.

Wen, Yeming, Dustin Tran, and Jimmy Ba. 2020. "BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning." In *ICLR*.

*arXiv:2102.10472 [Cs]*, February.

Xie, Jingjing, Bing Xu, and Zhang Chuang. 2013. "Horizontal and Vertical Ensemble with Deep Representation for Classification." *arXiv:1306.2759 [Cs, Stat]*, June.

Yao, Yuling, Aki Vehtari, Daniel Simpson, and Andrew Gelman. 2018. "Using Stacking to Average Bayesian Predictive Distributions (with Discussion)." *Bayesian Analysis* 13 (3): 917–1007.
