**WARNING: more than usually chaotic notes here**

Bayesian inference for massively parameterised networks.

To learn:

- marginal likelihood in model selection: how does it work with many optima?

Closely related: Generative models where we train a process to generate the phenomenon of interest.

## Backgrounders

Radford Neal’s thesis (Neal 1996) is a foundational asymptotically-Bayesian use of neural networks. Yarin Gal’s PhD thesis (Gal 2016) summarizes some implicit approximate approaches (e.g. the Bayesian interpretation of dropout). Diederik P. Kingma’s thesis is a blockbuster in this tradition.
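For intuition, here is a minimal NumPy sketch of the MC-dropout idea from Gal’s line of work: keep dropout switched on at test time and read predictive uncertainty off the spread of stochastic forward passes. The network, weights, and dropout rate below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed two-layer network; the weights are arbitrary for illustration.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1)) / np.sqrt(32)

def forward(x, p_drop=0.5, stochastic=True):
    """One stochastic forward pass with dropout on the hidden layer."""
    h = np.tanh(x @ W1)
    if stochastic:
        mask = rng.random(h.shape) > p_drop
        h = h * mask / (1 - p_drop)  # inverted-dropout scaling
    return h @ W2

# MC dropout: keep dropout ON at test time and average many passes.
x = np.array([[0.3]])
samples = np.stack([forward(x) for _ in range(500)])
mean = samples.mean(axis=0)   # predictive mean
std = samples.std(axis=0)     # predictive spread as an uncertainty proxy

print(mean, std)
```

The spread collapses to zero if dropout is switched off, which is the whole point: the test-time stochasticity is what carries the (approximate) posterior uncertainty.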

Alex Graves presented a poster for his paper (Graves 2011) on about the simplest weight-uncertainty scheme for recurrent nets (a diagonal Gaussian variational posterior over the weights). There is a third-party quick-and-dirty implementation.
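A toy sketch of that diagonal-Gaussian weight-uncertainty idea, in the spirit of Graves (2011) though not his exact algorithm: fit a factorized Gaussian posterior over the weights of a linear model by stochastic gradient ascent on the ELBO, using reparameterized samples. Every model and hyperparameter choice here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear-regression data.
N, d = 200, 2
true_w = np.array([1.5, -0.8])
X = rng.normal(size=(N, d))
noise_std = 0.5
y = X @ true_w + noise_std * rng.normal(size=N)

# Variational posterior q(w) = N(mu, diag(sigma^2)); prior p(w) = N(0, I).
mu = np.zeros(d)
rho = np.full(d, -1.0)          # sigma = softplus(rho) keeps sigma > 0

def softplus(r):
    return np.log1p(np.exp(r))

lr = 1e-3
for step in range(2000):
    sigma = softplus(rho)
    eps = rng.normal(size=d)
    w = mu + sigma * eps        # reparameterized weight sample

    # Gradient of the Gaussian log-likelihood wrt the sampled weights.
    grad_w = X.T @ (y - X @ w) / noise_std**2

    # ELBO ascent: likelihood term minus closed-form KL(q || prior) gradients.
    grad_mu = grad_w - mu
    grad_sigma = grad_w * eps - (sigma - 1.0 / sigma)
    grad_rho = grad_sigma * (1.0 - np.exp(-softplus(rho)))  # chain rule; equals sigmoid(rho)

    mu += lr * grad_mu
    rho += lr * grad_rho

print("posterior mean estimate:", mu, "posterior std estimate:", softplus(rho))
```

With this much data the variational mean should land near the true weights and the posterior standard deviations should shrink well below the prior’s.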

One could refer to the 2019 NeurIPS Bayesian deep learning workshop site, which has some more modern positioning. There was also a tutorial in 2020 by Dustin Tran, Jasper Snoek, and Balaji Lakshminarayanan: *Practical Uncertainty Estimation & Out-of-Distribution Robustness in Deep Learning*.

Generative methods are useful here, e.g. the variational autoencoder and the affiliated reparameterization trick. Likelihood-free methods seem to be in the air too.
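The reparameterization trick in one screenful: writing z = μ + σε with ε ~ N(0, 1) turns an expectation over a parameterized distribution into an expectation over a fixed one, so pathwise Monte Carlo gradients become available. The toy integrand f(z) = z² is chosen so the answer is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(42)

# z ~ N(mu, sigma^2) written as z = mu + sigma * eps with eps ~ N(0, 1),
# so dz/dmu = 1 and dz/dsigma = eps, and gradients of E[f(z)] become
# plain Monte Carlo averages of f'(z) * dz/dparam.
mu, sigma = 1.5, 0.7

def f(z):
    return z ** 2            # E[f(z)] = mu^2 + sigma^2

eps = rng.normal(size=100_000)
z = mu + sigma * eps

grad_mu_mc = np.mean(2 * z * 1.0)      # should approach d/dmu E[f] = 2 mu
grad_sigma_mc = np.mean(2 * z * eps)   # should approach d/dsigma E[f] = 2 sigma

print(grad_mu_mc, grad_sigma_mc)
```

The same trick, with a neural decoder in place of f, is what makes the VAE’s ELBO amenable to SGD.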

## Sampling from the posterior

TBD

## Stochastic Gradient Descent

I think this argument is already leveraged in Neal (1996), but see Mandt, Hoffman, and Blei (2017) for a highly developed treatment.

From their abstract:

> Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results.
>
> - We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions.
> - We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models.
> - We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly.
> - We analyze MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates.
> - Finally, we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler.
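A toy illustration of the first point, under many simplifications (1-D conjugate linear regression, a hand-picked learning rate, no attempt at their optimal tuning): constant-step minibatch SGD on the negative log-posterior never converges, but keeps bouncing in a stationary distribution centred near the posterior mode, so its iterates can be read as rough posterior samples.

```python
import numpy as np

rng = np.random.default_rng(7)

# Conjugate 1-D Bayesian linear regression, so the exact posterior is known.
N = 500
x = rng.normal(size=N)
true_w, noise_var, prior_var = 2.0, 1.0, 10.0
y = true_w * x + rng.normal(scale=np.sqrt(noise_var), size=N)

# Exact Gaussian posterior over the weight w.
post_prec = (x @ x) / noise_var + 1.0 / prior_var
post_mean = (x @ y) / noise_var / post_prec

# Constant-learning-rate SGD on the negative log-posterior, minibatch size B.
w, lr, B = 0.0, 1e-3, 10
samples = []
for t in range(20_000):
    idx = rng.integers(0, N, size=B)
    # Minibatch gradient, rescaled so its expectation is the full-data gradient.
    grad = -(N / B) * x[idx] @ (y[idx] - w * x[idx]) / noise_var + w / prior_var
    w -= lr * grad
    if t >= 5_000:                 # discard burn-in
        samples.append(w)

samples = np.array(samples)
print("exact posterior mean:", post_mean, "mean of SGD iterates:", samples.mean())
```

The iterate average sits on top of the posterior mean; matching the iterate *covariance* to the posterior is exactly the learning-rate tuning problem the paper solves.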

## Via NTK

How does this work? See He, Lakshminarayanan, and Teh (2020).
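As a warm-up for that paper, here is a sketch of the *empirical* neural tangent kernel of a tiny one-hidden-layer net, used as a GP kernel for regression. This is only the finite-width empirical NTK, not their full ensemble construction, and every architectural choice below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small one-hidden-layer net f(x) = v . tanh(u x + b), width m.
m = 64
u = rng.normal(size=m)
b = rng.normal(size=m)
v = rng.normal(size=m) / np.sqrt(m)
theta_dim = 3 * m

def jacobian(xs):
    """Jacobian of f(x; theta) wrt all parameters, one row per scalar input."""
    J = np.zeros((len(xs), theta_dim))
    for i, xi in enumerate(xs):
        pre = u * xi + b
        h = np.tanh(pre)
        dh = 1.0 - h ** 2              # tanh'
        J[i, :m] = v * dh * xi         # d f / d u
        J[i, m:2 * m] = v * dh         # d f / d b
        J[i, 2 * m:] = h               # d f / d v
    return J

def ntk(xa, xb):
    """Empirical NTK: inner products of parameter gradients."""
    return jacobian(xa) @ jacobian(xb).T

# Use the empirical NTK as a GP kernel on toy data.
x_train = np.linspace(-2, 2, 20)
y_train = np.sin(x_train)
x_test = np.array([0.5, 1.0])

K = ntk(x_train, x_train) + 1e-2 * np.eye(len(x_train))  # jitter for stability
k_star = ntk(x_test, x_train)
pred = k_star @ np.linalg.solve(K, y_train)
print(pred)
```

The kernel is by construction a symmetric positive semi-definite Gram matrix of parameter gradients; in the infinite-width limit this object stops depending on the random initialization, which is what licenses the GP view.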

## Ensemble methods

Deep learning has its own twists on model averaging and bagging: neural ensembles.
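A minimal sketch of the ensemble recipe (differently initialized members, predictive spread as uncertainty). To keep it self-contained and fast, each member here is just a random tanh feature map plus a ridge-fit output layer rather than a fully trained deep net; the recipe is the same.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy regression data.
x = np.linspace(-3, 3, 50)[:, None]
y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=50)

def train_member(seed):
    """One ensemble member: random tanh features + ridge-fit output weights."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(1, 50))
    bias = r.normal(size=50)
    phi = np.tanh(x @ W + bias)
    # Closed-form ridge regression for the output layer.
    v = np.linalg.solve(phi.T @ phi + 1e-2 * np.eye(50), phi.T @ y)
    return W, bias, v

members = [train_member(s) for s in range(10)]

def predict(x_new):
    preds = np.stack([np.tanh(x_new @ W + bias) @ v for W, bias, v in members])
    return preds.mean(axis=0), preds.std(axis=0)   # ensemble mean and spread

x_in = np.array([[0.0]])    # inside the training range
x_out = np.array([[8.0]])   # far outside it
mean_in, std_in = predict(x_in)
mean_out, std_out = predict(x_out)
print(std_in, std_out)
```

The members agree where the data pin them down and disagree off-distribution, so the ensemble spread grows away from the training range, which is the behaviour one wants from a cheap uncertainty estimate.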

## Practicalities

The computational toolsets for “neural” probabilistic programming and vanilla probabilistic programming are converging. See the tool listing under probabilistic programming.

## References

Eleftheriadis, Stefanos, Tom Nicholson, Marc Deisenroth, and James Hensman. 2017. “Identification of Gaussian Process State Space Models.” In *Advances in Neural Information Processing Systems 30*, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 5309–19. Curran Associates, Inc. http://papers.nips.cc/paper/7115-identification-of-gaussian-process-state-space-models.pdf.

Fabius, Otto, and Joost R. van Amersfoort. 2015. “Variational Recurrent Auto-Encoders.” In *Proceedings of ICLR*. http://arxiv.org/abs/1412.6581.

*4th Workshop on Bayesian Deep Learning (NeurIPS 2019)*, 17.

*Advances in Approximate Bayesian Inference Workshop, NIPS*.

*Advances in Approximate Bayesian Inference Workshop, NIPS*.

Gal, Yarin, and Zoubin Ghahramani. 2016. “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.” In *Proceedings of the 33rd International Conference on Machine Learning (ICML-16)*. http://arxiv.org/abs/1506.02142.

Gal, Yarin, and Zoubin Ghahramani. 2016. “Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference.” In *4th International Conference on Learning Representations (ICLR) Workshop Track*. http://arxiv.org/abs/1506.02158.

*IEEE Transactions on Signal Processing* 64 (13): 3444–57. https://doi.org/10.1109/TSP.2016.2546221.

Graves, Alex. 2011. “Practical Variational Inference for Neural Networks.” In *Proceedings of the 24th International Conference on Neural Information Processing Systems*, 2348–56. NIPS’11. USA: Curran Associates Inc. https://papers.nips.cc/paper/4329-practical-variational-inference-for-neural-networks.pdf.

Gu, Shixiang, Zoubin Ghahramani, and Richard E. Turner. 2015. “Neural Adaptive Sequential Monte Carlo.” In *Advances in Neural Information Processing Systems 28*, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2629–37. Curran Associates, Inc. http://papers.nips.cc/paper/5961-neural-adaptive-sequential-monte-carlo.pdf.

He, Bobby, Balaji Lakshminarayanan, and Yee Whye Teh. 2020. “Bayesian Deep Ensembles via the Neural Tangent Kernel.” In *Advances in Neural Information Processing Systems*. Vol. 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/0b1ec366924b26fc98fa7b71a9c249cf-Abstract.html.

Karl, Maximilian, Maximilian Soelch, Justin Bayer, and Patrick van der Smagt. 2017. “Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data.” In *Proceedings of ICLR*. http://arxiv.org/abs/1605.06432.

Kingma, Diederik P., and Max Welling. 2014. “Auto-Encoding Variational Bayes.” In *ICLR 2014 Conference*. http://arxiv.org/abs/1312.6114.

Lee, Jaehoon, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. 2018. “Deep Neural Networks as Gaussian Processes.” In *ICLR*. http://arxiv.org/abs/1711.00165.

Lobacheva, Ekaterina, Nadezhda Chirkova, and Dmitry Vetrov. 2017. “Bayesian Sparsification of Recurrent Neural Networks.” In *Workshop on Learning to Generate Natural Language*. http://arxiv.org/abs/1708.00077.

Louizos, Christos, and Max Welling. 2017. “Multiplicative Normalizing Flows for Variational Bayesian Neural Networks.” In *PMLR*, 2218–27. http://proceedings.mlr.press/v70/louizos17a.html.

MacKay, David J. C. 2003. *Information Theory, Inference & Learning Algorithms*. Cambridge University Press.

MacKay, David J. C. 1992. “A Practical Bayesian Framework for Backpropagation Networks.” *Neural Computation* 4 (3): 448–72. https://doi.org/10.1162/neco.1992.4.3.448.

Mandt, Stephan, Matthew D. Hoffman, and David M. Blei. 2017. “Stochastic Gradient Descent as Approximate Bayesian Inference.” *JMLR*, April. http://arxiv.org/abs/1704.04289.

Molchanov, Dmitry, Arsenii Ashukha, and Dmitry Vetrov. 2017. “Variational Dropout Sparsifies Deep Neural Networks.” In *Proceedings of ICML*. http://arxiv.org/abs/1701.05369.

Peluchetti, Stefano, and Stefano Favaro. 2020. “Infinitely Deep Neural Networks as Diffusion Processes.” In *International Conference on Artificial Intelligence and Statistics*, 1126–36. PMLR. http://proceedings.mlr.press/v108/peluchetti20a.html.

Rasmussen, Carl Edward, and Christopher K. I. Williams. 2006. *Gaussian Processes for Machine Learning*. Adaptive Computation and Machine Learning. Cambridge, Mass: Max-Planck-Gesellschaft; MIT Press. http://www.gaussianprocess.org/gpml/.

Wen, Yeming, Dustin Tran, and Jimmy Ba. 2020. “BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning.” In *ICLR*. http://arxiv.org/abs/2002.06715.
