The *Bayes-by-backprop* terminology seems to come from Blundell et al. (2015).

## Bayesian learning rule

M. E. Khan and Rue (2022):

> We show that a wide range of well-known learning algorithms from a variety of fields are all specific instances of a single learning algorithm derived from Bayesian principles. The starting point is the variational formulation by Zellner (1988), which is an extension of Eq. 1 to optimize over a well-defined candidate distribution \(q(\boldsymbol{\theta})\), and for which the minimizer \[ q_*(\boldsymbol{\theta})=\underset{q(\boldsymbol{\theta})}{\arg \min } \quad \mathbb{E}_q\left[\sum_{i=1}^N \ell\left(y_i, f_{\boldsymbol{\theta}}\left(\boldsymbol{x}_i\right)\right)\right]+\mathbb{D}_{KL}[q(\boldsymbol{\theta}) \| p(\boldsymbol{\theta})] \] defines a generalized posterior (Bissiri, Holmes, and Walker 2016; Catoni 2007; T. Zhang 1999) in the absence of a precise likelihood. The prior distribution is related to the regularizer, \(p(\boldsymbol{\theta}) \propto \exp (-R(\boldsymbol{\theta}))\), and \(\mathbb{D}_{KL}[\cdot \| \cdot]\) is the Kullback–Leibler divergence (KLD). In the case where \(\exp \left(-\ell\left(y_i, f_{\boldsymbol{\theta}}\left(\boldsymbol{x}_i\right)\right)\right)\) is proportional to the likelihood for \(y_i, \forall i\), then \(q_*(\boldsymbol{\theta})\) is the posterior distribution for \(\boldsymbol{\theta}\) (Zellner 1988).

The result is heavy on natural gradients and exponential families. Also, Emti is very charismatic, and I defy you to watch his presentation and not feel like this is the One True Way, at least for a few minutes. Probably related: Knoblauch, Jewson, and Damoulas (2022).
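To make the variational objective above concrete, here is a toy Bayes-by-backprop-style sketch. This is my own construction, not code from any of the papers cited: a factorized (here 1-d) Gaussian \(q\) is fitted to the expected-loss-plus-KL objective by reparameterized Monte Carlo gradients, on a conjugate linear model where the exact posterior is available as a check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i = theta * x_i + noise, with known noise scale.
N, noise_sd, theta_true = 50, 0.5, 0.8
x = rng.normal(size=N)
y = theta_true * x + noise_sd * rng.normal(size=N)

# Exact posterior under the prior theta ~ N(0, 1) (conjugate Gaussian case).
post_prec = 1.0 + (x @ x) / noise_sd**2
post_mean = (x @ y / noise_sd**2) / post_prec
post_sd = post_prec**-0.5

# Variational family q(theta) = N(mu, exp(rho)^2).  Minimize
# E_q[sum_i loss_i] + KL(q || prior) by reparameterized MC gradients;
# the KL between Gaussians is available in closed form.
mu, rho, lr, n_mc = 0.0, 0.0, 5e-3, 16
for _ in range(4000):
    sigma = np.exp(rho)
    eps = rng.normal(size=n_mc)
    theta = mu + sigma * eps                 # reparameterization trick
    # d/dtheta of the negative log likelihood, one value per MC sample
    dnll = np.array([(x * (t * x - y)).sum() / noise_sd**2 for t in theta])
    g_mu = dnll.mean() + mu                  # + dKL/dmu
    g_sigma = (eps * dnll).mean() + sigma - 1.0 / sigma  # + dKL/dsigma
    mu -= lr * g_mu
    rho -= lr * g_sigma * sigma              # chain rule through exp
print(mu, np.exp(rho), post_mean, post_sd)
```

With enough steps, \((\mu, e^\rho)\) lands close to the exact posterior mean and standard deviation; in the conjugate case the KL-regularized objective really is minimized by the posterior, per Zellner's result quoted above.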

## SGD as MCMC

Combining Markov chain Monte Carlo and stochastic gradient descent, in the sense of using SGD to compute a cheap approximation to MCMC posterior sampling. Overviews in Ma, Chen, and Fox (2015) and Mandt, Hoffman, and Blei (2017). A lot of probabilistic neural nets leverage this idea.

A related idea is estimating gradients by Monte Carlo; there is nothing necessarily Bayesian about that *per se*: there we compute a noisy estimate of a deterministic quantity.
In *this* setting we are interested in the noise itself.

I have a vague memory that this argument is leveraged in Neal (1996)? Should check. For sure the version in Mandt, Hoffman, and Blei (2017) is a highly developed and modern take. Basically, they analyse the distribution near convergence as an autoregressive process:

> Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results.
>
> - We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions.
> - We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models.
> - We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly.
> - We analyze MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates.
> - Finally, we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler.

The article is rather beautiful. Importantly, they leverage the assumption that we are sampling from approximately (log-)quadratic posterior modes, which means that we should be suspicious of the method when:

- The log-posterior is not locally quadratic, i.e. the distribution is not well approximated by a Gaussian at the mode; and
- the same in the tails: if there are low-probability but high-importance posterior configurations whose tails are not Gaussian, we should be skeptical that they will be sampled well. I have an intuition that this is the more stringent requirement, but TBH I am not sure of the exact relationship between these two conditions.

The analysis leverages gradient flow, a continuous-time limit of stochastic gradient descent.
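The autoregressive picture is easy to reproduce in a toy case. Below is my own sketch, not their code: constant-step SGD on a 1-d quadratic loss with Gaussian gradient noise is exactly an AR(1) process, so the empirical stationary variance can be checked against the closed form, and averaging the tail iterates (Polyak style) pins down the optimum far more precisely than the last iterate does.

```python
import numpy as np

rng = np.random.default_rng(1)

# Quadratic loss L(x) = a * x^2 / 2 with optimum at 0; each "minibatch"
# gradient is a*x + noise.  Constant-step SGD is then the AR(1) process
#   x_{t+1} = (1 - eta*a) * x_t - eta * noise_t
a, eta, noise_sd, T = 1.0, 0.1, 1.0, 200_000
x, xs = 0.0, np.empty(T)
for t in range(T):
    grad = a * x + noise_sd * rng.normal()
    x -= eta * grad
    xs[t] = x

burn = T // 10
# Exact AR(1) stationary variance: eta * sigma^2 / (a * (2 - eta * a)).
stationary_var = eta * noise_sd**2 / (a * (2 - eta * a))
print(xs[burn:].var(), stationary_var)  # empirical vs theoretical variance
print(xs[burn:].mean())                 # tail-averaged iterate, near the optimum 0
```

Note how the stationary variance scales with the learning rate: this is the knob Mandt, Hoffman, and Blei tune to match the chain's stationary distribution to a posterior.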

## Stochastic Weight Averaging

A popular recent development is the Stochastic Weight Averaging family of methods (Izmailov et al. 2018, 2020; Maddox et al. 2019; Wilson and Izmailov 2020). See Andrew G. Wilson's web page for a brief description of the sub-methods, since he seems to have been involved in all of them.

## Stochastic Gradient Langevin MCMC

"a Markov chain reminiscent of noisy gradient descent" (Welling and Teh 2011), extending vanilla Langevin dynamics.
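A minimal sketch of the idea (a toy example of mine, not Welling and Teh's code): step along a minibatch estimate of the log-posterior gradient and inject Gaussian noise with variance equal to the step size. On a conjugate Gaussian-mean model the samples can be checked against the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data y_i ~ N(theta, 1), prior theta ~ N(0, 10): conjugate, so the exact
# posterior is available for comparison.
N, batch = 100, 10
ydata = rng.normal(1.0, 1.0, size=N)
post_prec = 1 / 10 + N          # prior precision + N * likelihood precision
post_mean = ydata.sum() / post_prec
post_var = 1 / post_prec

# SGLD with a small constant step: theta += (eta/2) * stochastic gradient
# of the log posterior + N(0, eta) injected noise.
eta, T, burn = 1e-4, 100_000, 10_000
theta, samples = 0.0, np.empty(T)
for t in range(T):
    yb = rng.choice(ydata, size=batch, replace=False)
    # minibatch estimate of d/dtheta log posterior
    grad = -theta / 10 + (N / batch) * (yb - theta).sum()
    theta += 0.5 * eta * grad + np.sqrt(eta) * rng.normal()
    samples[t] = theta
print(samples[burn:].mean(), post_mean)
print(samples[burn:].var(), post_var)
```

With a constant (rather than decaying) step, the minibatch gradient noise slightly inflates the sampled variance; the decreasing-step schedule in the original paper makes that bias vanish asymptotically.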

## Stein Variational GD

Perhaps related? An ensemble method. See Stein VGD.
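Sketching it for myself (my own toy code, using the usual RBF kernel with a median-heuristic bandwidth): each particle moves along a kernel-weighted average of the other particles' log-density gradients, plus a kernel-gradient repulsion term that keeps the ensemble spread out.

```python
import numpy as np

rng = np.random.default_rng(3)

def svgd_step(x, grad_logp, eps=0.2):
    """One SVGD update for 1-d particles x with an RBF kernel."""
    n = len(x)
    diff = x[:, None] - x[None, :]                 # diff[i, j] = x_i - x_j
    h2 = np.median(diff**2) / np.log(n) + 1e-12    # median heuristic bandwidth
    k = np.exp(-diff**2 / (2 * h2))
    # phi(x_i) = mean_j [ k(x_j, x_i) grad_logp(x_j) + grad_{x_j} k(x_j, x_i) ]
    #          = attraction toward high density + repulsion between particles
    phi = (k @ grad_logp(x) + (k * diff).sum(axis=1) / h2) / n
    return x + eps * phi

# Target: standard normal, grad log p(x) = -x.  Start the particles far away.
x = rng.normal(loc=3.0, scale=0.5, size=100)
for _ in range(1000):
    x = svgd_step(x, lambda z: -z)
print(x.mean(), x.std())  # the ensemble should approximate N(0, 1)
```

Without the repulsion term this would just be (kernel-smoothed) gradient ascent and all particles would collapse onto the mode; the repulsion is what makes it an approximate sampler rather than an optimizer.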

## SG Hamiltonian Monte Carlo

This, surprisingly, works, I am told? T. Chen, Fox, and Guestrin (2014).
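A toy version of the update as I understand it from the paper, in its naive form with the noise-estimate term \(\hat{\beta}\) set to zero and all parameter choices mine: momentum plus a friction term \(\alpha\) that absorbs the stochastic-gradient noise, with matched injected noise so the chain targets \(\exp(-U)\).

```python
import numpy as np

rng = np.random.default_rng(4)

# Target exp(-U) with U(theta) = theta^2 / 2, i.e. a standard normal.
# SGHMC update (naive form, beta_hat = 0):
#   v     <- v - eta * noisy_grad_U(theta) - alpha * v + N(0, 2 * alpha * eta)
#   theta <- theta + v
eta, alpha, grad_noise_sd, T = 0.01, 0.1, 1.0, 200_000
theta, v = 0.0, 0.0
samples = np.empty(T)
for t in range(T):
    grad = theta + grad_noise_sd * rng.normal()   # stochastic gradient of U
    v += -eta * grad - alpha * v + np.sqrt(2 * alpha * eta) * rng.normal()
    theta += v
    samples[t] = theta
print(samples.mean(), samples.var())  # roughly 0 and 1
```

The friction is the point: plain stochastic-gradient HMC without it accumulates the gradient noise as spurious kinetic energy and the samples overdisperse.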

## SG thermostats

Some kind of variance control using auxiliary variables? See Ding et al. (2014).
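As far as I can tell, the auxiliary variable is a scalar friction \(\xi\) that is itself adapted so the kinetic energy matches the target temperature, soaking up however much gradient noise is actually present. A toy sketch of the stochastic gradient Nosé-Hoover thermostat with my own parameter choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# SGNHT for a 1-d standard normal target, U(theta) = theta^2 / 2.
# The thermostat variable xi adapts until E[p^2] matches temperature 1,
# compensating for gradient noise of unknown magnitude.
h, A, grad_noise_sd, T = 0.01, 1.0, 1.0, 300_000
theta, p, xi = 0.0, 0.0, A
samples = np.empty(T)
for t in range(T):
    grad = theta + grad_noise_sd * rng.normal()   # noisy gradient of U
    p += -xi * p * h - grad * h + np.sqrt(2 * A * h) * rng.normal()
    theta += p * h
    xi += (p * p - 1.0) * h                        # kinetic-energy feedback
    samples[t] = theta
print(samples.mean(), samples.var())
```

The appeal over SGHMC is that the friction need not be chosen to dominate the (unknown) gradient-noise level; the feedback loop on \(\xi\) finds it automatically.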

## SG Fisher scoring

See Ahn, Korattikara, and Welling (2012). I assume there is a connection to MC gradients via the score trick?

## To file

(M. Khan et al. 2018; Osawa et al. 2019; G. Zhang et al. 2018).

## References

*Proceedings of the 29th International Conference on International Conference on Machine Learning*, 1771–78. ICML'12. Madison, WI, USA: Omnipress.

*Proceedings of the 39th International Conference on Machine Learning*, 414–34. PMLR.

*Journal of the Royal Statistical Society: Series B (Statistical Methodology)* 78 (5): 1103–30.

*Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37*, 1613–22. ICML'15. Lille, France: JMLR.org.

*Machine Learning: Science and Technology* 3 (4): 045002.

*Proceedings of the 32nd International Conference on Neural Information Processing Systems*, 8278–88. NIPS'18. Red Hook, NY, USA: Curran Associates Inc.

*IMS Lecture Notes Monograph Series* 56: 1–163.

*Mathematics of Computation* 91 (335): 1247–80.

*2018 Information Theory and Applications Workshop (ITA)*, 1–10.

*Proceedings of the 31st International Conference on Machine Learning*, 1683–91. Beijing, China: PMLR.

*Annals of Probability* 3 (1): 146–58.

*Proceedings of the 32nd International Conference on Neural Information Processing Systems*, 9187–97. NIPS'18. Red Hook, NY, USA: Curran Associates Inc.

*Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2*, 3203–11. NIPS'14. Cambridge, MA, USA: MIT Press.

*Communications on Pure and Applied Mathematics* 28 (1): 1–47.

*arXiv:1605.01559 [Math, Stat]*, May.

*arXiv:2105.04504 [Cs, Stat]*.

*Proceedings of the National Academy of Sciences* 118 (9): e2015617118.

*arXiv:1710.06595 [Stat]*, October.

*arXiv:1812.00793 [Cs, Math, Stat]*, September.

*Journal of the Royal Statistical Society: Series B (Statistical Methodology)* 73 (2): 123–214.

*Physical Review Letters* 118 (1): 010601.

*Journal of the Royal Statistical Society: Series B (Methodological)* 56 (4): 549–81.

*arXiv:1903.12322 [Cs, Stat]*, March.

*International Conference on Artificial Intelligence and Statistics*, 703–11. PMLR.

*Proceedings of The 35th Uncertainty in Artificial Intelligence Conference*, 1169–79. PMLR.

*arXiv:1906.01930 [Cs, Stat]*, July.

*Proceedings of the 35th International Conference on Machine Learning*, 2611–20. PMLR.

*Journal of Machine Learning Research* 23 (132): 1–109.

*Uncertainty in Artificial Intelligence*.

*Advances In Neural Information Processing Systems*.

*Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2*, 2917–25. NIPS'15. Cambridge, MA, USA: MIT Press.

*Proceedings of the 19th International Conference on Artificial Intelligence and Statistics*, 1070–77. arXiv.

*JMLR*, April.

*arXiv:2004.12550 [Stat]*, October.

*Journal of Machine Learning Research* 21 (146): 1–76.

*arXiv:1610.00781 [Math, Stat]*, October.

*Advances in Neural Information Processing Systems*. Vol. 32. Red Hook, NY, USA: Curran Associates, Inc.

*Nuclear Physics B* 180 (3): 378–84.

*Statistics & Probability Letters* 182 (March): 109321.

*arXiv:2105.14594 [Cs, Stat]*, May.

*Advances in Neural Information Processing Systems*. Vol. 28. NIPS'15. Curran Associates, Inc.

*ACM Transactions on Knowledge Discovery from Data* 17 (2): 29:1–37.

*Graphical Models, Exponential Families, and Variational Inference*. Vol. 1. Foundations and Trends® in Machine Learning. Now Publishers.

*Proceedings of the 28th International Conference on International Conference on Machine Learning*, 681–88. ICML'11. Madison, WI, USA: Omnipress.

*Proceedings of the 37th International Conference on Machine Learning*, 119:10248–59. PMLR.

*Statistics & Probability Letters* 91 (Supplement C): 14–19.

*The American Statistician* 42 (4): 278–80.

*Journal of Econometrics*, Information and Entropy Econometrics, 107 (1): 41–50.

*Proceedings of the 35th International Conference on Machine Learning*, 5852–61. PMLR.

*Proceedings of the Twelfth Annual Conference on Computational Learning Theory*, 156–63. COLT '99. New York, NY, USA: Association for Computing Machinery.

*Molecular Physics* 116 (21-22): 3214–23.
