Bayesian posterior inference via optimisation

Conditioning by gradient

Randomly exploring the posterior space.

The Bayes-by-backprop terminology seems to come from Blundell et al. (2015).

Bayesian learning rule

M. E. Khan and Rue (2022):

We show that a wide range of well-known learning algorithms from a variety of fields are all specific instances of a single learning algorithm derived from Bayesian principles. The starting point is the variational formulation by Zellner (1988), which is an extension of Eq. 1 to optimize over a well-defined candidate distribution \(q(\boldsymbol{\theta})\), and for which the minimizer \[ q_*(\boldsymbol{\theta})=\underset{q(\boldsymbol{\theta})}{\arg \min } \quad \mathbb{E}_q\left[\sum_{i=1}^N \ell\left(y_i, f_{\boldsymbol{\theta}}\left(\boldsymbol{x}_i\right)\right)\right]+\mathbb{D}_{K L}[q(\boldsymbol{\theta}) \| p(\boldsymbol{\theta})] \] defines a generalized posterior (Bissiri, Holmes, and Walker 2016; Catoni 2007; T. Zhang 1999) in lack of a precise likelihood. The prior distribution is related to the regularizer, \(p(\boldsymbol{\theta}) \propto \exp (-R(\boldsymbol{\theta}))\), and \(\mathbb{D}_{K L}[\cdot \| \cdot]\) is the Kullback-Leibler Divergence (KLD). In the case where \(\exp \left(-\ell\left(y_i, f_{\boldsymbol{\theta}}\left(\boldsymbol{x}_i\right)\right)\right)\) is proportional to the likelihood for \(y_i, \forall i\), then \(q_*(\boldsymbol{\theta})\) is the posterior distribution for \(\boldsymbol{\theta}\) (Zellner 1988).

The result is heavy on natural gradient and exponential families. Also Emti is very charismatic and I defy you to watch his presentation and not feel like this is the One True Way, at least for a few minutes. Probably related: Knoblauch, Jewson, and Damoulas (2022).
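To make the variational objective above concrete, here is a minimal numpy sketch of a toy instance where everything is closed-form; the data and all constants are illustrative, not from any paper.

```python
import numpy as np

# Toy instance of the generalized-posterior objective: squared-error loss
# l(y, theta * x) = (y - theta * x)^2 / 2, prior p(theta) = N(0, 1),
# and a Gaussian candidate q(theta) = N(mu, sigma^2).
# Both E_q[sum_i l] and KL(q || p) are available in closed form here,
# so we can minimise the objective by plain gradient descent.
x = np.array([1.0, 2.0, -1.0])
y = np.array([1.2, 1.9, -0.7])

mu, log_sigma = 0.0, 0.0
lr = 0.05
for _ in range(500):
    sigma2 = np.exp(2 * log_sigma)
    # d/dmu of E_q[loss] + KL:  -sum x (y - mu x) + mu
    grad_mu = -np.sum(x * (y - mu * x)) + mu
    # d/dlog_sigma:  sigma^2 * sum x^2 + (sigma^2 - 1)
    grad_ls = sigma2 * np.sum(x**2) + sigma2 - 1.0
    mu -= lr * grad_mu
    log_sigma -= lr * grad_ls

# Because exp(-l) here is a Gaussian likelihood, q_* is the exact conjugate
# posterior: N(sum(xy) / (1 + sum(x^2)), 1 / (1 + sum(x^2))).
post_prec = 1.0 + np.sum(x**2)
print(mu, np.exp(2 * log_sigma))                 # approx 0.814, 0.143
print(np.sum(x * y) / post_prec, 1.0 / post_prec)
```

In the conjugate case the optimiser recovers the exact posterior, as the Zellner result promises; with a non-Gaussian loss the same recipe gives the generalized posterior instead.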


Combining Markov Chain Monte Carlo and Stochastic Gradient Descent, in the sense of using SGD to do some cheap approximation to MCMC posterior sampling. Overviews in Ma, Chen, and Fox (2015) and Mandt, Hoffman, and Blei (2017). A lot of probabilistic neural nets leverage this idea.

A related idea is estimating gradients of parameters by Monte Carlo; there is nothing necessarily Bayesian about that per se; in that case we are doing a noisy estimate of a deterministic quantity. In this setting we are interested in the noise itself.

I have a vague memory that this argument is leveraged in Neal (1996)? Should check. For sure the version in Mandt, Hoffman, and Blei (2017) is a highly developed and modern take. Basically, they analyse the distribution near convergence as an autoregressive process:

Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results.

  1. We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions.
  2. We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models.
  3. We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly.
  4. We analyze MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates.
  5. Finally, we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler.

The article is rather beautiful. Importantly they leverage the assumption that we are sampling from approximately (log-)quadratic posterior modes, which means that we should be suspicious of the method when

  1. The log-posterior is not approximately quadratic, i.e. the distribution is not well approximated by a Gaussian at the mode, and
  2. the same holds for the tails. If there are low-probability but high-importance posterior configurations that are not Gaussian in the tails, we should be skeptical that they will be sampled well; I have an intuition that this is a more stringent requirement, but TBH I am not sure of the exact relationship between these two conditions.

The analysis leverages gradient flow, a continuous-time limit of stochastic gradient descent.
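The core observation is easy to see in one dimension. The following sketch (constants illustrative, not from the paper) runs constant-rate SGD on a noisy quadratic, where the iterates form an AR(1) process whose stationary variance has a closed form that can be checked empirically:

```python
import numpy as np

# Constant-step-size SGD on L(theta) = theta^2 / 2 with additive gradient
# noise is the AR(1) process
#   theta' = (1 - eta) * theta - eta * eps,   eps ~ N(0, c),
# whose stationary variance is eta * c / (2 - eta): the Gaussian that
# Mandt, Hoffman, and Blei match to a posterior by tuning eta.
rng = np.random.default_rng(0)
eta, c = 0.1, 1.0
theta, samples = 0.0, []
for t in range(200_000):
    grad = theta + rng.normal(scale=np.sqrt(c))   # noisy gradient
    theta -= eta * grad
    if t > 1_000:                                 # discard burn-in
        samples.append(theta)

print(np.var(samples))        # empirical stationary variance
print(eta * c / (2 - eta))    # closed form, about 0.0526
```

Tuning `eta` so this stationary Gaussian matches a target posterior is exactly the KL-minimisation step in their result 1.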

Stochastic Weight Averaging

A popular recent development is the Stochastic Weight Averaging family of methods (Izmailov et al. 2018, 2020; Maddox et al. 2019; Wilson and Izmailov 2020). See Andrew G. Wilson's web page for a brief description of the sub-methods, since he seems to have been involved in all of them.
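The basic move is simple enough to sketch (all hyperparameters here are illustrative): run constant-rate SGD and average the tail of the trajectory, so the noise around the mode cancels out.

```python
import numpy as np

# Minimal sketch of Stochastic Weight Averaging: run constant-rate SGD on a
# noisy quadratic (optimum theta* = 0), then average the second half of the
# iterates. A single iterate bounces around the optimum with stationary
# standard deviation about 0.23; the average is far tighter.
rng = np.random.default_rng(1)
eta = 0.1
theta, tail = 0.0, []
for t in range(20_000):
    grad = theta + rng.normal()      # noisy gradient of theta^2 / 2
    theta -= eta * grad
    if t >= 10_000:                  # SWA: collect the tail of the trajectory
        tail.append(theta)

theta_swa = np.mean(tail)
print(abs(theta), abs(theta_swa))
```

SWAG (Maddox et al. 2019) goes one step further and fits a Gaussian posterior approximation using the SWA mean together with the covariance of those same tail iterates.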

Stochastic Gradient Langevin MCMC

“a Markov Chain reminiscent of noisy gradient descent” (Welling and Teh 2011), extending vanilla Langevin dynamics.
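A sketch of the SGLD update on a conjugate toy problem, so the samples can be checked against the exact posterior; the step size, batch size and data are illustrative, and I use a constant step rather than the decaying schedule of the original paper.

```python
import numpy as np

# Stochastic gradient Langevin dynamics (Welling and Teh 2011) for the
# posterior over a Gaussian mean: prior N(0, 1), y_i ~ N(theta, 1).
# Each step uses a minibatch gradient rescaled by N / n, plus injected
# Gaussian noise whose variance equals the step size.
rng = np.random.default_rng(2)
N, n, eps = 100, 10, 1e-3
y = rng.normal(loc=1.0, size=N)

theta, samples = 0.0, []
for t in range(60_000):
    batch = rng.choice(y, size=n, replace=False)
    grad_log_post = -theta + (N / n) * np.sum(batch - theta)
    theta += 0.5 * eps * grad_log_post + rng.normal(scale=np.sqrt(eps))
    if t > 5_000:
        samples.append(theta)

# Exact conjugate posterior: N(sum(y) / (N + 1), 1 / (N + 1)).
print(np.mean(samples), np.sum(y) / (N + 1))
print(np.var(samples), 1.0 / (N + 1))   # constant step size inflates the
                                        # variance somewhat (Brosse et al. 2018)
```

The residual variance inflation from the finite, constant learning rate is exactly the kind of approximation error that Mandt, Hoffman, and Blei (2017) and Brosse, Moulines, and Durmus (2018) quantify.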

Stein Variational GD

Perhaps related? An ensemble method. See Stein VGD.
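A sketch of the SVGD update for a one-dimensional standard Gaussian target, with an RBF kernel and a fixed bandwidth (both choices, and all constants, illustrative): each particle follows a kernel-smoothed gradient of the log density plus a repulsive term that stops the ensemble collapsing onto the mode.

```python
import numpy as np

# Stein variational gradient descent (Liu and Wang 2019) on a N(0, 1)
# target, with an ensemble of 50 particles started well away from the mode.
rng = np.random.default_rng(3)
particles = rng.normal(loc=3.0, scale=0.5, size=50)

def svgd_step(x, step=0.05, h=1.0):
    diffs = x[None, :] - x[:, None]      # diffs[j, i] = x_i - x_j
    k = np.exp(-diffs**2 / (2 * h))      # RBF kernel k(x_j, x_i)
    grad_logp = -x                       # score of the N(0, 1) target
    # phi(x_i) = mean_j [ k(x_j, x_i) grad log p(x_j) + d/dx_j k(x_j, x_i) ]
    phi = (k * grad_logp[:, None] + diffs / h * k).mean(axis=0)
    return x + step * phi

for _ in range(2_000):
    particles = svgd_step(particles)

print(particles.mean(), particles.std())   # ensemble approximates N(0, 1)
```

The dynamics are deterministic given the initialisation, which is why it reads more like an ensemble optimisation method than an MCMC sampler.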

SG Hamiltonian Monte Carlo

This, surprisingly, works, I am told; see T. Chen, Fox, and Guestrin (2014).
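A sketch of the update in the style of T. Chen, Fox, and Guestrin (2014), with the gradient-noise estimate set to zero (a common simplification) and all constants illustrative: momentum plus a friction term that dissipates the extra energy injected by minibatch gradient noise.

```python
import numpy as np

# Stochastic gradient HMC on U(theta) = theta^2 / 2, i.e. a N(0, 1) target,
# with synthetic gradient noise standing in for minibatching. The friction
# alpha bleeds off the spurious energy; the injected noise has variance
# 2 * alpha * eta (taking the noise estimate beta_hat = 0).
rng = np.random.default_rng(4)
eta, alpha = 0.01, 0.1
theta, v, samples = 0.0, 0.0, []
for t in range(200_000):
    noisy_grad = theta + rng.normal()    # stochastic gradient of U
    v = (1 - alpha) * v - eta * noisy_grad \
        + rng.normal(scale=np.sqrt(2 * alpha * eta))
    theta += v
    if t > 10_000:
        samples.append(theta)

print(np.mean(samples), np.var(samples))   # roughly the N(0, 1) marginal
```

Without the friction term the unaccounted-for gradient noise would heat the chain and the samples would overdisperse, which is the failure mode the paper is built around.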

SG thermostats

Some kind of variance control using auxiliary variables? See Ding et al. (2014).

SG Fisher scoring

See Ahn, Korattikara, and Welling (2012). I assume there is a connection to MC gradients via the score trick?


Ahn, Sungjin, Anoop Korattikara, and Max Welling. 2012. “Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring.” In Proceedings of the 29th International Conference on Machine Learning, 1771–78. ICML’12. Madison, WI, USA: Omnipress.
Alexos, Antonios, Alex J. Boyd, and Stephan Mandt. 2022. “Structured Stochastic Gradient MCMC.” In Proceedings of the 39th International Conference on Machine Learning, 414–34. PMLR.
Bissiri, P. G., C. C. Holmes, and S. G. Walker. 2016. “A General Framework for Updating Belief Distributions.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78 (5): 1103–30.
Blundell, Charles, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. 2015. “Weight Uncertainty in Neural Networks.” In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, 1613–22. ICML’15. Lille, France.
Bradley, Arwen V., Carlos A. Gomez-Uribe, and Manish Reddy Vuyyuru. 2022. “Shift-Curvature, SGD, and Generalization.” Machine Learning: Science and Technology 3 (4): 045002.
Brosse, Nicolas, Éric Moulines, and Alain Durmus. 2018. “The Promises and Pitfalls of Stochastic Gradient Langevin Dynamics.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 8278–88. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Catoni, Olivier. 2007. “PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning.” IMS Lecture Notes Monograph Series 56: 1–163.
Chada, Neil, and Xin Tong. 2022. “Convergence Acceleration of Ensemble Kalman Inversion in Nonlinear Settings.” Mathematics of Computation 91 (335): 1247–80.
Chandramoorthy, Nisha, Andreas Loukas, Khashayar Gatmiry, and Stefanie Jegelka. 2022. “On the Generalization of Learning Algorithms That Do Not Converge.” arXiv.
Chaudhari, Pratik, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. 2017. “Entropy-SGD: Biasing Gradient Descent Into Wide Valleys.” arXiv.
Chaudhari, Pratik, and Stefano Soatto. 2018. “Stochastic Gradient Descent Performs Variational Inference, Converges to Limit Cycles for Deep Networks.” In 2018 Information Theory and Applications Workshop (ITA), 1–10.
Chen, Tianqi, Emily Fox, and Carlos Guestrin. 2014. “Stochastic Gradient Hamiltonian Monte Carlo.” In Proceedings of the 31st International Conference on Machine Learning, 1683–91. Beijing, China: PMLR.
Chen, Zaiwei, Shancong Mou, and Siva Theja Maguluri. 2021. “Stationary Behavior of Constant Stepsize SGD Type Algorithms: An Asymptotic Characterization.” arXiv.
Choi, Hyunsun, Eric Jang, and Alexander A. Alemi. 2019. “WAIC, but Why? Generative Ensembles for Robust Anomaly Detection.” arXiv.
Csiszar, I. 1975. “I-Divergence Geometry of Probability Distributions and Minimization Problems.” Annals of Probability 3 (1): 146–58.
Detommaso, Gianluca, Tiangang Cui, Alessio Spantini, Youssef Marzouk, and Robert Scheichl. 2018. “A Stein Variational Newton Method.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 9187–97. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Dieuleveut, Aymeric, Alain Durmus, and Francis Bach. 2018. “Bridging the Gap Between Constant Step Size Stochastic Gradient Descent and Markov Chains.” arXiv.
Ding, Nan, Youhan Fang, Ryan Babbush, Changyou Chen, Robert D. Skeel, and Hartmut Neven. 2014. “Bayesian Sampling Using Stochastic Gradient Thermostats.” In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, 3203–11. NIPS’14. Cambridge, MA, USA: MIT Press.
Donsker, M. D., and S. R. S. Varadhan. 1975. “Asymptotic Evaluation of Certain Markov Process Expectations for Large Time, I.” Communications on Pure and Applied Mathematics 28 (1): 1–47.
Durmus, Alain, and Eric Moulines. 2016. “High-Dimensional Bayesian Inference via the Unadjusted Langevin Algorithm.” arXiv:1605.01559 [Math, Stat], May.
Dutordoir, Vincent, James Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, and Nicolas Durrande. 2021. “Deep Neural Networks as Point Estimates for Deep Gaussian Processes.” In arXiv:2105.04504 [Cs, Stat].
Feng, Yu, and Yuhai Tu. 2021. “The Inverse Variance–Flatness Relation in Stochastic Gradient Descent Is Critical for Finding Flat Minima.” Proceedings of the National Academy of Sciences 118 (9): e2015617118.
Futami, Futoshi, Issei Sato, and Masashi Sugiyama. 2017. “Variational Inference Based on Robust Divergences.” arXiv:1710.06595 [Stat], October.
Ge, Rong, Holden Lee, and Andrej Risteski. 2020. “Simulated Tempering Langevin Monte Carlo II: An Improved Proof Using Soft Markov Chain Decomposition.” arXiv:1812.00793 [Cs, Math, Stat], September.
Girolami, Mark, and Ben Calderhead. 2011. “Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (2): 123–214.
Goldt, Sebastian, and Udo Seifert. 2017. “Stochastic Thermodynamics of Learning.” Physical Review Letters 118 (1): 010601.
Grenander, Ulf, and Michael I. Miller. 1994. “Representations of Knowledge in Complex Systems.” Journal of the Royal Statistical Society: Series B (Methodological) 56 (4): 549–81.
Hodgkinson, Liam, Robert Salomone, and Fred Roosta. 2019. “Implicit Langevin Algorithms for Sampling From Log-Concave Densities.” arXiv:1903.12322 [Cs, Stat], March.
Immer, Alexander, Maciej Korzepa, and Matthias Bauer. 2021. “Improving Predictions of Bayesian Neural Nets via Local Linearization.” In International Conference on Artificial Intelligence and Statistics, 703–11. PMLR.
Izmailov, Pavel, Wesley J. Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2020. “Subspace Inference for Bayesian Deep Learning.” In Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, 1169–79. PMLR.
Izmailov, Pavel, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. “Averaging Weights Leads to Wider Optima and Better Generalization,” March.
Khan, Mohammad Emtiyaz, Alexander Immer, Ehsan Abedi, and Maciej Korzepa. 2020. “Approximate Inference Turns Deep Networks into Gaussian Processes.” arXiv:1906.01930 [Cs, Stat], July.
Khan, Mohammad Emtiyaz, and Håvard Rue. 2022. “The Bayesian Learning Rule.” arXiv.
Khan, Mohammad, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, and Akash Srivastava. 2018. “Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam.” In Proceedings of the 35th International Conference on Machine Learning, 2611–20. PMLR.
Knoblauch, Jeremias, Jack Jewson, and Theodoros Damoulas. 2022. “An Optimization-Centric View on Bayes’ Rule: Reviewing and Generalizing Variational Inference.” Journal of Machine Learning Research 23 (132): 1–109.
Kristiadi, Agustinus, Matthias Hein, and Philipp Hennig. 2021. “Learnable Uncertainty Under Laplace Approximations.” In Uncertainty in Artificial Intelligence.
Liu, Qiang, and Dilin Wang. 2019. “Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm.” In Advances In Neural Information Processing Systems.
Ma, Yi-An, Tianqi Chen, and Emily B. Fox. 2015. “A Complete Recipe for Stochastic Gradient MCMC.” In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, 2917–25. NIPS’15. Cambridge, MA, USA: MIT Press.
Maclaurin, Dougal, David Duvenaud, and Ryan P. Adams. 2015. “Early Stopping as Nonparametric Variational Inference.” In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, 1070–77. arXiv.
Maddox, Wesley, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, and Andrew Gordon Wilson. 2019. “A Simple Baseline for Bayesian Uncertainty in Deep Learning,” February.
Mandt, Stephan, Matthew D. Hoffman, and David M. Blei. 2017. “Stochastic Gradient Descent as Approximate Bayesian Inference.” JMLR, April.
Margossian, Charles C., Aki Vehtari, Daniel Simpson, and Raj Agrawal. 2020. “Hamiltonian Monte Carlo Using an Adjoint-Differentiated Laplace Approximation: Bayesian Inference for Latent Gaussian Models and Beyond.” arXiv:2004.12550 [Stat], October.
Martens, James. 2020. “New Insights and Perspectives on the Natural Gradient Method.” Journal of Machine Learning Research 21 (146): 1–76.
Neal, Radford M. 1996. “Bayesian Learning for Neural Networks.” Secaucus, NJ, USA: Springer-Verlag New York, Inc.
Norton, Richard A., and Colin Fox. 2016. “Tuning of MCMC with Langevin, Hamiltonian, and Other Stochastic Autoregressive Proposals.” arXiv:1610.00781 [Math, Stat], October.
Osawa, Kazuki, Siddharth Swaroop, Mohammad Emtiyaz E Khan, Anirudh Jain, Runa Eschenhagen, Richard E Turner, and Rio Yokota. 2019. “Practical Deep Learning with Bayesian Principles.” In Advances in Neural Information Processing Systems. Vol. 32. Red Hook, NY, USA: Curran Associates, Inc.
Parisi, G. 1981. “Correlation Functions and Computer Simulations.” Nuclear Physics B 180 (3): 378–84.
Rásonyi, Miklós, and Kinga Tikosi. 2022. “On the Stability of the Stochastic Gradient Langevin Algorithm with Dependent Data Stream.” Statistics & Probability Letters 182 (March): 109321.
Ritter, Hippolyt, Martin Kukla, Cheng Zhang, and Yingzhen Li. 2021. “Sparse Uncertainty Representation in Deep Learning with Inducing Weights.” arXiv:2105.14594 [Cs, Stat], May.
Shang, Xiaocheng, Zhanxing Zhu, Benedict Leimkuhler, and Amos J Storkey. 2015. “Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling.” In Advances in Neural Information Processing Systems. Vol. 28. NIPS’15. Curran Associates, Inc.
Smith, Samuel L., and Quoc V. Le. 2018. “A Bayesian Perspective on Generalization and Stochastic Gradient Descent.” In.
Smith, Samuel L., Benoit Dherin, David Barrett, and Soham De. 2020. “On the Origin of Implicit Regularization in Stochastic Gradient Descent.” In.
Sun, Jianhui, Ying Yang, Guangxu Xun, and Aidong Zhang. 2023. “Scheduling Hyperparameters to Improve Generalization: From Centralized SGD to Asynchronous SGD.” ACM Transactions on Knowledge Discovery from Data 17 (2): 29:1–37.
Wainwright, Martin J., and Michael I. Jordan. 2008. Graphical Models, Exponential Families, and Variational Inference. Vol. 1. Foundations and Trends® in Machine Learning. Now Publishers.
Welling, Max, and Yee Whye Teh. 2011. “Bayesian Learning via Stochastic Gradient Langevin Dynamics.” In Proceedings of the 28th International Conference on International Conference on Machine Learning, 681–88. ICML’11. Madison, WI, USA: Omnipress.
Wenzel, Florian, Kevin Roth, Bastiaan Veeling, Jakub Swiatkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. 2020. “How Good Is the Bayes Posterior in Deep Neural Networks Really?” In Proceedings of the 37th International Conference on Machine Learning, 119:10248–59. PMLR.
Wilson, Andrew Gordon, and Pavel Izmailov. 2020. “Bayesian Deep Learning and a Probabilistic Perspective of Generalization,” February.
Xifara, T., C. Sherlock, S. Livingstone, S. Byrne, and M. Girolami. 2014. “Langevin Diffusions and the Metropolis-Adjusted Langevin Algorithm.” Statistics & Probability Letters 91 (Supplement C): 14–19.
Zellner, Arnold. 1988. “Optimal Information Processing and Bayes’s Theorem.” The American Statistician 42 (4): 278–80.
Zellner, Arnold. 2002. “Information Processing and Bayesian Analysis.” Journal of Econometrics, Information and Entropy Econometrics, 107 (1): 41–50.
Zhang, Guodong, Shengyang Sun, David Duvenaud, and Roger Grosse. 2018. “Noisy Natural Gradient as Variational Inference.” In Proceedings of the 35th International Conference on Machine Learning, 5852–61. PMLR.
Zhang, Tong. 1999. “Theoretical Analysis of a Class of Randomized Regularization Methods.” In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 156–63. COLT ’99. New York, NY, USA: Association for Computing Machinery.
Zhang, Yao, Andrew M. Saxe, Madhu S. Advani, and Alpha A. Lee. 2018. “Energy-Entropy Competition and the Effectiveness of Stochastic Gradient Descent in Machine Learning.” Molecular Physics 116 (21-22): 3214–23.
