Bayesian posterior inference via optimization

Conditioning by gradient

August 17, 2020 — April 4, 2024

Tags: Bayes, estimator distribution, functional analysis, Markov processes, Monte Carlo, neural nets, optimization, probabilistic algorithms, probability, SDEs, stochastic processes
Figure 1: Randomly exploring the posterior space.

The Bayes-by-backprop terminology seems to come from Blundell et al. (2015). I’m not sure if there is a single idea here so much as a family of ideas that use SDE representations of SGD to sample from a posterior distribution.

1 Bayesian learning rule

M. E. Khan and Rue (2024):

We show that a wide range of well-known learning algorithms from a variety of fields are all specific instances of a single learning algorithm derived from Bayesian principles. The starting point is the variational formulation by Zellner (1988), which is an extension of Eq. 1 [regularized empirical risk minimization, in their numbering] to optimize over a well-defined candidate distribution \(q(\boldsymbol{\theta})\), and for which the minimizer \[ q_*(\boldsymbol{\theta})=\underset{q(\boldsymbol{\theta})}{\arg \min } \quad \mathbb{E}_q\left[\sum_{i=1}^N \ell\left(y_i, f_{\boldsymbol{\theta}}\left(\boldsymbol{x}_i\right)\right)\right]+\mathbb{D}_{K L}[q(\boldsymbol{\theta}) \| p(\boldsymbol{\theta})] \] defines a generalized posterior (Bissiri, Holmes, and Walker 2016; Catoni 2007; T. Zhang 1999) in lack of a precise likelihood. The prior distribution is related to the regularizer, \(p(\boldsymbol{\theta}) \propto \exp (-R(\boldsymbol{\theta}))\), and \(\mathbb{D}_{K L}[\cdot \| \cdot]\) is the Kullback-Leibler Divergence (KLD). In the case where \(\exp \left(-\ell\left(y_i, f_{\boldsymbol{\theta}}\left(\boldsymbol{x}_i\right)\right)\right)\) is proportional to the likelihood for \(y_i, \forall i\), then \(q_*(\boldsymbol{\theta})\) is the posterior distribution for \(\boldsymbol{\theta}\) (Zellner 1988).

The development leans heavily on natural gradients and exponential families, ultimately tweaking Adam into a Bayesian posterior sampler. Probably related: Knoblauch, Jewson, and Damoulas (2022).
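To make the variational formulation concrete, here is a minimal sketch (mine, not the paper's) of optimizing the objective above over a Gaussian family \(q = \mathcal{N}(m, s^2)\) for a toy Gaussian-location model. Both the expected loss and the KL term are available in closed form here, so plain gradient descent on \((m, \log s)\) recovers the exact conjugate posterior:

```python
import numpy as np

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1), so the loss
# l(y_i, theta) = (y_i - theta)^2 / 2 is the negative log-likelihood.
# Variational family: q = N(m, s^2), parameterized as (m, t) with s = exp(t).
rng = np.random.default_rng(0)
y = rng.normal(1.5, 1.0, size=50)
N = len(y)

m, t = 0.0, 0.0
lr = 1e-2
for _ in range(5_000):
    s2 = np.exp(2 * t)
    # Closed-form gradients of E_q[sum_i l] + KL(q || p) w.r.t. (m, t):
    grad_m = (N * m - y.sum()) + m   # expected-loss term plus KL term
    grad_t = N * s2 + (s2 - 1.0)
    m -= lr * grad_m
    t -= lr * grad_t

# The exact conjugate posterior is N(sum(y) / (N + 1), 1 / (N + 1)).
print("variational:", m, np.exp(2 * t))
print("exact:      ", y.sum() / (N + 1), 1.0 / (N + 1))
```

In a non-conjugate model the expectations would instead be estimated by Monte Carlo with the reparameterization trick, which is essentially Bayes-by-backprop (Blundell et al. 2015).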

2 SGD as MCMC

This family combines Markov chain Monte Carlo and stochastic gradient descent, in the sense of using SGD to do a cheap approximation to MCMC posterior sampling. Overviews in Ma, Chen, and Fox (2015) and Mandt, Hoffman, and Blei (2017). A lot of probabilistic neural nets leverage this idea.

A related idea is estimating gradients by Monte Carlo; there is nothing necessarily Bayesian about that per se, since there we want a noisy estimate of a deterministic quantity. In the present setting, by contrast, we are interested in the noise itself.

I have a vague memory that this argument is leveraged in Neal (1996)? Should check. For sure the version in Mandt, Hoffman, and Blei (2017) is a highly developed and modern take. Basically, they analyse the distribution of the iterates near convergence as an autoregressive process:

Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results.

  1. We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions.
  2. We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models.
  3. We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly.
  4. We analyse MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates.
  5. Finally, we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler.

The article is rather beautiful. Importantly, they leverage the assumption that we are sampling from approximately (log-)quadratic posterior modes, which means we should be suspicious of the method when

  1. The log-posterior is not approximately quadratic, i.e. the distribution is not well approximated by a Gaussian at the mode, and
  2. The same fails in the tails: if there are low-probability but high-importance posterior configurations that are not Gaussian in the tails, we should be skeptical that they will be sampled well. I have an intuition that this is the more stringent requirement, but TBH I am not sure of the exact relationship between these two conditions.

The analysis leverages a continuous-time limit of stochastic gradient descent: near a quadratic mode, the iterates behave like an Ornstein–Uhlenbeck process, whose stationary distribution is Gaussian.
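As a toy illustration of the flavour of their result (my sketch, not their code): for a one-dimensional quadratic loss with additive gradient noise, constant-step SGD is exactly an AR(1) chain, i.e. a discretized Ornstein–Uhlenbeck process, and its stationary variance can be computed and then matched to a target posterior variance by tuning the learning rate:

```python
import numpy as np

# Constant-step SGD on a quadratic loss L(theta) = lam * theta^2 / 2,
# with i.i.d. gradient noise of variance c: g_t = lam*theta_t + sqrt(c)*xi_t.
# The iterates theta_{t+1} = (1 - eps*lam)*theta_t - eps*sqrt(c)*xi_t form
# an AR(1) chain with stationary variance
# eps^2 * c / (1 - (1 - eps*lam)^2) ~= eps * c / (2 * lam) for small eps.
rng = np.random.default_rng(0)
lam, c, eps = 2.0, 4.0, 0.01
theta, samples = 0.0, []
for t in range(200_000):
    grad = lam * theta + np.sqrt(c) * rng.normal()
    theta -= eps * grad
    if t > 10_000:  # discard burn-in
        samples.append(theta)

print("empirical stationary variance:", np.var(samples))
print("theory eps*c/(2*lam):         ", eps * c / (2 * lam))
# Mandt et al.'s trick, roughly: choose eps (and the preconditioner) so that
# this stationary distribution matches the posterior as well as possible.
```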

3 Stochastic Weight Averaging

A popular recent development is the Stochastic Weight Averaging family of methods (Izmailov et al. 2018, 2020; Maddox et al. 2019; Wilson and Izmailov 2020). See Andrew G. Wilson’s web page for a brief description of the sub-methods, since he seems to have been involved in all of them.
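As I understand the recipe: run SGD with a largish constant step past apparent convergence, and summarize the trajectory rather than keeping only the final iterate. SWA takes the running average of the weights; SWAG additionally fits a Gaussian to the iterates and samples from it at test time. A minimal sketch of the diagonal-covariance variant follows; `grad_loss` and the collection schedule are stand-ins of my own, not anyone's published API:

```python
import numpy as np

def swag_diag(theta0, grad_loss, lr=0.01, steps=5000,
              burn_in=1000, collect_every=10):
    """Minimal SWAG-diagonal sketch: track first and second moments of the
    SGD iterates, then treat N(mean, var) as an approximate posterior."""
    theta = theta0.copy()
    mean = np.zeros_like(theta)
    sq_mean = np.zeros_like(theta)
    n = 0
    rng = np.random.default_rng(0)
    for t in range(steps):
        theta -= lr * grad_loss(theta, rng)    # noisy gradient step
        if t >= burn_in and t % collect_every == 0:
            n += 1
            mean += (theta - mean) / n         # running mean (the SWA solution)
            sq_mean += (theta**2 - sq_mean) / n
    var = np.maximum(sq_mean - mean**2, 1e-12)
    return mean, var                           # sample: mean + sqrt(var) * eps

# Toy use: quadratic loss with noisy gradients.
mean, var = swag_diag(
    theta0=np.ones(3),
    grad_loss=lambda th, rng: th + 0.3 * rng.normal(size=th.shape),
)
print("SWA mean:", mean, " SWAG-diag variance:", var)
```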

4 Stochastic Gradient Langevin MCMC

A Markov chain reminiscent of noisy gradient descent (Welling and Teh 2011), extending vanilla Langevin dynamics to stochastic minibatch gradients.
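The update is just SGD on a minibatch estimate of the log-posterior gradient, plus injected Gaussian noise whose variance is matched to the step size; with a decaying step size the chain approaches the posterior without a Metropolis correction. A minimal sketch on a conjugate Gaussian-location model (my toy choice, so we can check against the exact posterior):

```python
import numpy as np

rng = np.random.default_rng(0)
# Model: y_i ~ N(theta, 1), prior theta ~ N(0, 1).
y = rng.normal(1.0, 1.0, size=1000)
N, batch = len(y), 50

def grad_log_post(theta, idx):
    # Minibatch estimate: grad log prior + (N/n) * sum of grad log-likelihoods.
    return -theta + (N / len(idx)) * np.sum(y[idx] - theta)

theta, samples = 0.0, []
for t in range(20_000):
    eps = 1e-4 * (1 + t) ** -0.33   # slowly decaying step size
    idx = rng.choice(N, size=batch, replace=False)
    theta += 0.5 * eps * grad_log_post(theta, idx) + np.sqrt(eps) * rng.normal()
    samples.append(theta)

post_var = 1.0 / (N + 1)            # exact conjugate posterior variance
print("SGLD mean/var: ", np.mean(samples[5000:]), np.var(samples[5000:]))
print("exact mean/var:", y.sum() / (N + 1), post_var)
```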

5 Stein Variational GD

Perhaps related? A particle-ensemble method (Liu and Wang 2019). See Stein VGD.
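For orientation, the SVGD update moves an ensemble of particles along \(\phi(x) = \frac{1}{n}\sum_j \left[k(x_j, x)\nabla \log p(x_j) + \nabla_{x_j} k(x_j, x)\right]\): the first term is a kernel-smoothed gradient pulling particles toward high density, the second a repulsion keeping them spread out. A minimal 1-D NumPy sketch of mine, with an RBF kernel and the median-heuristic bandwidth:

```python
import numpy as np

def svgd_step(x, grad_logp, stepsize=0.05):
    """One SVGD update for a vector of 1-D particles x, RBF kernel."""
    n = len(x)
    diff = x[:, None] - x[None, :]            # diff[i, j] = x_i - x_j
    h = np.median(diff**2) / np.log(n + 1)    # median-heuristic bandwidth
    k = np.exp(-diff**2 / h)                  # k[i, j] = k(x_i, x_j)
    # phi(x_i) = mean_j [ k(x_j, x_i) grad_logp(x_j) + d/dx_j k(x_j, x_i) ],
    # and for this RBF kernel d/dx_j k(x_j, x_i) = (2 / h) * (x_i - x_j) * k.
    phi = (k @ grad_logp(x) + (2.0 / h) * (k * diff).sum(axis=1)) / n
    return x + stepsize * phi

# Target: N(2, 1), so grad log p(x) = -(x - 2).
rng = np.random.default_rng(0)
x = rng.normal(-5.0, 0.5, size=100)           # deliberately bad start
for _ in range(1_000):
    x = svgd_step(x, lambda z: -(z - 2.0))
print("particle mean/var:", x.mean(), x.var())  # should approach 2 and 1
```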

6 SG Hamiltonian Monte Carlo

This, surprisingly, works, I am told (T. Chen, Fox, and Guestrin 2014).
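The trick that makes it work with stochastic gradients is a friction term in the momentum update that soaks up the extra minibatch noise, turning the dynamics into second-order Langevin rather than plain HMC. A minimal sketch of the discretized update on a toy Gaussian target, with the gradient-noise estimate \(\hat{\beta}\) simply set to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
# Target: theta ~ N(0, 1), so U(theta) = theta^2 / 2 and grad U = theta.
# Pretend the gradient is noisy, as with a minibatch estimate.
noisy_grad_U = lambda th: th + 0.5 * rng.normal()

eps, alpha = 0.01, 0.1       # step size and friction; beta_hat taken as 0
theta, v, samples = 0.0, 0.0, []
for t in range(100_000):
    theta += v                                 # position update
    v += (-eps * noisy_grad_U(theta)           # momentum update with friction
          - alpha * v
          + np.sqrt(2 * alpha * eps) * rng.normal())
    samples.append(theta)

print("SGHMC mean/var:", np.mean(samples[10_000:]), np.var(samples[10_000:]))
# should be close to the target's 0 and 1
```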

7 SG thermostats

Some kind of variance control using an auxiliary “thermostat” variable? See Ding et al. (2014).
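Roughly, yes: where SGHMC fixes the friction, the Nosé–Hoover-style thermostat of Ding et al. (2014) promotes the friction \(\xi\) to a dynamic variable that rises when the kinetic energy runs above its target (e.g. because of unaccounted minibatch noise) and falls otherwise. A minimal sketch, to the best of my reading of their algorithm, again on a 1-D Gaussian target:

```python
import numpy as np

rng = np.random.default_rng(0)
# Target: theta ~ N(0, 1), so grad U(theta) = theta, plus fake minibatch noise.
noisy_grad_U = lambda th: th + 0.5 * rng.normal()

h, A = 0.01, 1.0             # step size and injected-noise level
theta, p, xi = 0.0, 0.0, A   # xi is the thermostat (adaptive friction)
samples = []
for t in range(100_000):
    p += -xi * p * h - noisy_grad_U(theta) * h + np.sqrt(2 * A * h) * rng.normal()
    theta += p * h
    xi += (p * p - 1.0) * h  # kinetic energy above target 1 => more friction
    samples.append(theta)

print("SGNHT mean/var:", np.mean(samples[10_000:]), np.var(samples[10_000:]))
# the thermostat should hold Var(p) near 1 and theta near the N(0, 1) target
```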

8 SG Fisher scoring

See Ahn, Korattikara, and Welling (2012). I assume there is a connection to MC gradients via the score trick?

9 Incoming

(M. Khan et al. 2018; Osawa et al. 2019; G. Zhang et al. 2018).

Knoblauch, Jewson, and Damoulas (2019):

We advocate an optimization-centric view on and introduce a novel generalization of Bayesian inference. Our inspiration is the representation of Bayes’ rule as infinite-dimensional optimization problem (Csiszár 1975; Donsker and Varadhan 1975; Zellner 1988). First, we use it to prove an optimality result of standard Variational Inference (VI): Under the proposed view, the standard Evidence Lower Bound (ELBO) maximizing VI posterior is preferable to alternative approximations of the Bayesian posterior. Next, we argue for generalizing standard Bayesian inference. The need for this arises in situations of severe misalignment between reality and three assumptions underlying standard Bayesian inference: (1) Well-specified priors, (2) well-specified likelihoods, (3) the availability of infinite computing power. Our generalization addresses these shortcomings with three arguments and is called the Rule of Three (RoT). We derive it axiomatically and recover existing posteriors as special cases, including the Bayesian posterior and its approximation by standard VI. In contrast, approximations based on alternative ELBO-like objectives violate the axioms. Finally, we study a special case of the RoT that we call Generalized Variational Inference (GVI). GVI posteriors are a large and tractable family of belief distributions specified by three arguments: A loss, a divergence and a variational family. GVI posteriors have appealing properties, including consistency and an interpretation as approximate ELBO. The last part of the paper explores some attractive applications of GVI in popular machine learning models, including robustness and more appropriate marginals. After deriving black box inference schemes for GVI posteriors, their predictive performance is investigated on Bayesian Neural Networks and Deep Gaussian Processes, where GVI can comprehensively improve upon existing methods.

Connection to Gibbs posteriors?

10 References

Ahn, Korattikara, and Welling. 2012. “Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring.” In Proceedings of the 29th International Conference on Machine Learning. ICML’12.
Alexos, Boyd, and Mandt. 2022. “Structured Stochastic Gradient MCMC.” In Proceedings of the 39th International Conference on Machine Learning.
Bissiri, Holmes, and Walker. 2016. “A General Framework for Updating Belief Distributions.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Blundell, Cornebise, Kavukcuoglu, et al. 2015. “Weight Uncertainty in Neural Networks.” In Proceedings of the 32nd International Conference on Machine Learning - Volume 37. ICML’15.
Bradley, Gomez-Uribe, and Vuyyuru. 2022. “Shift-Curvature, SGD, and Generalization.” Machine Learning: Science and Technology.
Brosse, Moulines, and Durmus. 2018. “The Promises and Pitfalls of Stochastic Gradient Langevin Dynamics.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS’18.
Catoni. 2007. “PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning.” IMS Lecture Notes Monograph Series.
Chada, and Tong. 2022. “Convergence Acceleration of Ensemble Kalman Inversion in Nonlinear Settings.” Mathematics of Computation.
Chandramoorthy, Loukas, Gatmiry, et al. 2022. “On the Generalization of Learning Algorithms That Do Not Converge.”
Chaudhari, Choromanska, Soatto, et al. 2017. “Entropy-SGD: Biasing Gradient Descent Into Wide Valleys.”
Chaudhari, and Soatto. 2018. “Stochastic Gradient Descent Performs Variational Inference, Converges to Limit Cycles for Deep Networks.” In 2018 Information Theory and Applications Workshop (ITA).
Chen, Tianqi, Fox, and Guestrin. 2014. “Stochastic Gradient Hamiltonian Monte Carlo.” In Proceedings of the 31st International Conference on Machine Learning.
Chen, Zaiwei, Mou, and Maguluri. 2021. “Stationary Behavior of Constant Stepsize SGD Type Algorithms: An Asymptotic Characterization.”
Choi, Jang, and Alemi. 2019. “WAIC, but Why? Generative Ensembles for Robust Anomaly Detection.”
Csiszár. 1975. “I-Divergence Geometry of Probability Distributions and Minimization Problems.” The Annals of Probability.
Dehaene. 2016. “Expectation Propagation Performs a Smoothed Gradient Descent.” arXiv:1612.05053 [Stat].
Detommaso, Cui, Spantini, et al. 2018. “A Stein Variational Newton Method.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS’18.
Dieuleveut, Durmus, and Bach. 2018. “Bridging the Gap Between Constant Step Size Stochastic Gradient Descent and Markov Chains.”
Ding, Fang, Babbush, et al. 2014. “Bayesian Sampling Using Stochastic Gradient Thermostats.” In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. NIPS’14.
Donsker, and Varadhan. 1975. “Asymptotic Evaluation of Certain Markov Process Expectations for Large Time, I.” Communications on Pure and Applied Mathematics.
Durmus, and Moulines. 2016. “High-Dimensional Bayesian Inference via the Unadjusted Langevin Algorithm.” arXiv:1605.01559 [Math, Stat].
Dutordoir, Hensman, van der Wilk, et al. 2021. “Deep Neural Networks as Point Estimates for Deep Gaussian Processes.” arXiv:2105.04504 [Cs, Stat].
Feng, and Tu. 2021. “The Inverse Variance–Flatness Relation in Stochastic Gradient Descent Is Critical for Finding Flat Minima.” Proceedings of the National Academy of Sciences.
Futami, Sato, and Sugiyama. 2017. “Variational Inference Based on Robust Divergences.” arXiv:1710.06595 [Stat].
Ge, Lee, and Risteski. 2020. “Simulated Tempering Langevin Monte Carlo II: An Improved Proof Using Soft Markov Chain Decomposition.” arXiv:1812.00793 [Cs, Math, Stat].
Girolami, and Calderhead. 2011. “Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Goldt, and Seifert. 2017. “Stochastic Thermodynamics of Learning.” Physical Review Letters.
Grenander, and Miller. 1994. “Representations of Knowledge in Complex Systems.” Journal of the Royal Statistical Society: Series B (Methodological).
Gu, Levine, Sutskever, et al. 2016. “MuProp: Unbiased Backpropagation for Stochastic Neural Networks.” In Proceedings of ICLR.
Hodgkinson, Salomone, and Roosta. 2019. “Implicit Langevin Algorithms for Sampling From Log-Concave Densities.” arXiv:1903.12322 [Cs, Stat].
Honkela, Tornio, Raiko, et al. 2008. “Natural Conjugate Gradient in Variational Inference.” In Neural Information Processing. Lecture Notes in Computer Science.
Immer, Korzepa, and Bauer. 2021. “Improving Predictions of Bayesian Neural Nets via Local Linearization.” In International Conference on Artificial Intelligence and Statistics.
Izmailov, Maddox, Kirichenko, et al. 2020. “Subspace Inference for Bayesian Deep Learning.” In Proceedings of the 35th Uncertainty in Artificial Intelligence Conference.
Izmailov, Podoprikhin, Garipov, et al. 2018. “Averaging Weights Leads to Wider Optima and Better Generalization.”
Khan, Mohammad Emtiyaz, Immer, Abedi, et al. 2020. “Approximate Inference Turns Deep Networks into Gaussian Processes.” arXiv:1906.01930 [Cs, Stat].
Khan, Mohammad Emtiyaz, and Lin. 2017. “Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models.” In Artificial Intelligence and Statistics.
Khan, Mohammad, Nielsen, Tangkaratt, et al. 2018. “Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam.” In Proceedings of the 35th International Conference on Machine Learning.
Khan, Mohammad Emtiyaz, and Rue. 2024. “The Bayesian Learning Rule.”
Knoblauch, Jewson, and Damoulas. 2019. “Generalized Variational Inference: Three Arguments for Deriving New Posteriors.”
———. 2022. “An Optimization-Centric View on Bayes’ Rule: Reviewing and Generalizing Variational Inference.” Journal of Machine Learning Research.
Kristiadi, Hein, and Hennig. 2021. “Learnable Uncertainty Under Laplace Approximations.” In Uncertainty in Artificial Intelligence.
Le. 2018. “A Bayesian Perspective on Generalization and Stochastic Gradient Descent.”
Lin, Dangel, Eschenhagen, et al. 2024. “Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective.”
Liu, and Wang. 2019. “Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm.” In Advances in Neural Information Processing Systems.
Ma, Chen, and Fox. 2015. “A Complete Recipe for Stochastic Gradient MCMC.” In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2. NIPS’15.
Maclaurin, Duvenaud, and Adams. 2015. “Early Stopping as Nonparametric Variational Inference.” In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics.
Maddox, Garipov, Izmailov, et al. 2019. “A Simple Baseline for Bayesian Uncertainty in Deep Learning.”
Mandt, Hoffman, and Blei. 2017. “Stochastic Gradient Descent as Approximate Bayesian Inference.” Journal of Machine Learning Research.
Margossian, Vehtari, Simpson, et al. 2020. “Hamiltonian Monte Carlo Using an Adjoint-Differentiated Laplace Approximation: Bayesian Inference for Latent Gaussian Models and Beyond.” arXiv:2004.12550 [Stat].
Martens. 2020. “New Insights and Perspectives on the Natural Gradient Method.” Journal of Machine Learning Research.
Matsubara, Knoblauch, Briol, et al. 2022. “Robust Generalised Bayesian Inference for Intractable Likelihoods.” Journal of the Royal Statistical Society Series B: Statistical Methodology.
Neal. 1996. Bayesian Learning for Neural Networks.
Norton, and Fox. 2016. “Tuning of MCMC with Langevin, Hamiltonian, and Other Stochastic Autoregressive Proposals.” arXiv:1610.00781 [Math, Stat].
Opper, and Archambeau. 2009. “The Variational Gaussian Approximation Revisited.” Neural Computation.
Osawa, Swaroop, Khan, et al. 2019. “Practical Deep Learning with Bayesian Principles.” In Advances in Neural Information Processing Systems.
Papamarkou, Skoularidou, Palla, et al. 2024. “Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI.”
Parisi. 1981. “Correlation Functions and Computer Simulations.” Nuclear Physics B.
Rásonyi, and Tikosi. 2022. “On the Stability of the Stochastic Gradient Langevin Algorithm with Dependent Data Stream.” Statistics & Probability Letters.
Rezende, Mohamed, and Wierstra. 2015. “Stochastic Backpropagation and Approximate Inference in Deep Generative Models.” In Proceedings of ICML.
Ritter, Kukla, Zhang, et al. 2021. “Sparse Uncertainty Representation in Deep Learning with Inducing Weights.” arXiv:2105.14594 [Cs, Stat].
Ruiz, Titsias, and Blei. 2016. “The Generalized Reparameterization Gradient.” In Advances in Neural Information Processing Systems.
Sato. 2001. “Online Model Selection Based on the Variational Bayes.” Neural Computation.
Shang, Zhu, Leimkuhler, et al. 2015. “Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling.” In Advances in Neural Information Processing Systems. NIPS’15.
Smith, Dherin, Barrett, et al. 2020. “On the Origin of Implicit Regularization in Stochastic Gradient Descent.”
Sun, Yang, Xun, et al. 2023. “Scheduling Hyperparameters to Improve Generalization: From Centralized SGD to Asynchronous SGD.” ACM Transactions on Knowledge Discovery from Data.
Wainwright, and Jordan. 2008. Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends® in Machine Learning.
Welling, and Teh. 2011. “Bayesian Learning via Stochastic Gradient Langevin Dynamics.” In Proceedings of the 28th International Conference on Machine Learning. ICML’11.
Wenzel, Roth, Veeling, et al. 2020. “How Good Is the Bayes Posterior in Deep Neural Networks Really?” In Proceedings of the 37th International Conference on Machine Learning.
Wilson, and Izmailov. 2020. “Bayesian Deep Learning and a Probabilistic Perspective of Generalization.”
Xifara, Sherlock, Livingstone, et al. 2014. “Langevin Diffusions and the Metropolis-Adjusted Langevin Algorithm.” Statistics & Probability Letters.
Zellner. 1988. “Optimal Information Processing and Bayes’s Theorem.” The American Statistician.
———. 2002. “Information Processing and Bayesian Analysis.” Journal of Econometrics.
Zhang, Tong. 1999. “Theoretical Analysis of a Class of Randomized Regularization Methods.” In Proceedings of the Twelfth Annual Conference on Computational Learning Theory. COLT ’99.
Zhang, Xinhua. 2013. “Bregman Divergence and Mirror Descent.”
Zhang, Yao, Saxe, Advani, et al. 2018. “Energy-Entropy Competition and the Effectiveness of Stochastic Gradient Descent in Machine Learning.” Molecular Physics.
Zhang, Guodong, Sun, Duvenaud, et al. 2018. “Noisy Natural Gradient as Variational Inference.” In Proceedings of the 35th International Conference on Machine Learning.