Randomly exploring the posterior space.
Combining Markov Chain Monte Carlo and Stochastic Gradient Descent, in the sense of using SGD as a cheap approximation to MCMC posterior sampling. Overviews in Ma, Chen, and Fox (2015) and Mandt, Hoffman, and Blei (2017). A lot of probabilistic neural nets leverage this idea.
A related idea is estimating gradients of parameters by Monte Carlo; there is nothing necessarily Bayesian about that per se, since there we are making a noisy estimate of a deterministic quantity. Here, by contrast, we are interested in the noise itself.
I have a vague memory that this argument is leveraged in Neal (1996)? Should check. For sure the version in Mandt, Hoffman, and Blei (2017) is a highly developed and modern take. Basically, they analyse the distribution near convergence as an autoregressive process:
> Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results.
>
> - We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions.
> - We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models.
> - We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly.
> - We analyze MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates.
> - Finally, we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler.
The article is rather beautiful. Importantly, they leverage the assumption that we are sampling from approximately (log-)quadratic posterior modes, which means that we should be suspicious of the method when
- The posterior is not quadratic, i.e. the distribution is not well approximated by a Gaussian at the mode, and
- The same for the tails: if there are low-probability but high-importance posterior configurations whose tails are not Gaussian, we should be skeptical that they will be sampled well. I have an intuition that this is a more stringent requirement, but TBH I am not sure of the exact relationship between these two conditions.
The analysis leverages gradient flow, a continuous-time limit of stochastic gradient descent.
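To make the AR(1) picture concrete, here is a toy sketch (my own, not from the paper) of constant-learning-rate SGD on a 1-D quadratic loss with artificial minibatch noise. Near the mode the iterates behave like an autoregressive process whose stationary mean is the mode and whose stationary variance is set by the learning rate; all names and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: loss L(w) = (w - 2)^2 / 2, i.e. a Gaussian posterior mode at w = 2.
# Minibatch noise is mimicked by adding unit Gaussian noise to the exact gradient.
def noisy_grad(w, noise_scale=1.0):
    return (w - 2.0) + noise_scale * rng.standard_normal()

eta = 0.1  # constant learning rate; it controls the stationary variance
w = 0.0
samples = []
for t in range(20000):
    w -= eta * noisy_grad(w)
    if t > 2000:  # discard burn-in, treat the rest as draws from the stationary law
        samples.append(w)

samples = np.asarray(samples)
# The update is w' = (1 - eta) w + eta * 2 - eta * xi, an AR(1) process:
# stationary mean 2, stationary variance eta^2 / (1 - (1 - eta)^2) = eta / (2 - eta).
print(samples.mean(), samples.var())
```

Shrinking `eta` concentrates the stationary distribution around the mode (recovering plain optimization); the Mandt, Hoffman, and Blei (2017) move is to instead tune `eta` so this stationary distribution matches the posterior.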
Stochastic Weight Averaging
A popular recent development is the Stochastic Weight Averaging family of methods (Izmailov et al. 2018, 2020; Maddox et al. 2019; Wilson and Izmailov 2020). See Andrew G Wilson's web page for a brief description of the sub-methods, since he seems to have been involved in all of them.
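The core SWA mechanic is simple enough to sketch: run SGD with a constant (or cyclical) learning rate past apparent convergence, and keep a running average of the weights visited. This toy version (my own illustration, using a quadratic stand-in for a network loss; the warm-up length and averaging interval are arbitrary) shows the average landing nearer the mode than any single noisy iterate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D quadratic loss with minibatch noise, standing in for a network.
target = np.array([1.0, -3.0])
def noisy_grad(w):
    return (w - target) + 0.5 * rng.standard_normal(2)

w = np.zeros(2)
eta = 0.05
swa_w, n_avg = np.zeros(2), 0
for t in range(5000):
    w -= eta * noisy_grad(w)
    if t >= 1000 and t % 10 == 0:  # average after a warm-up, every c-th iterate
        swa_w = (swa_w * n_avg + w) / (n_avg + 1)
        n_avg += 1

# swa_w is close to the mode; the final iterate w still rattles around it.
print(swa_w, w)
```

SWA-Gaussian (Maddox et al. 2019) extends this by also accumulating second moments of the iterates to get an approximate Gaussian posterior over weights.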
Stochastic Gradient Langevin MCMC
"A Markov chain reminiscent of noisy gradient descent" (Welling and Teh 2011), extending vanilla Langevin dynamics.
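The SGLD update is a gradient step on the (minibatch) negative log posterior plus injected Gaussian noise whose variance equals the step size. A minimal sketch on a toy Gaussian target (exact gradient here rather than a minibatch one, and a fixed step size rather than the decreasing schedule Welling and Teh analyse):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy target: posterior N(2, 1), so the negative log posterior gradient is (w - 2).
def grad_neg_log_post(w):
    return w - 2.0

eta = 0.05  # step size; the injected noise has variance eta
w = 0.0
samples = []
for t in range(50000):
    w += -0.5 * eta * grad_neg_log_post(w) + np.sqrt(eta) * rng.standard_normal()
    if t > 5000:
        samples.append(w)

samples = np.asarray(samples)
print(samples.mean(), samples.var())  # approx 2 and 1 for small eta
```

The contrast with constant SGD above is the noise source: SGLD injects noise deliberately so that (as the step size shrinks) the chain targets the exact posterior, rather than relying on minibatch noise and accepting a Gaussian approximation.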
Stein Variational GD
Not quite what I mean, but related. An ensemble method. See Stein VGD.
SG Hamiltonian Monte Carlo
SG thermostats
- Some kind of variance control using auxiliary variables?
SG Fisher scoring
Ahn, Korattikara, and Welling (2012)