Log-concave distributions

associated tools


Langevin MCMC

Langevin MCMC is a Markov chain reminiscent of "noisy gradient descent". Holden Lee and Andrej Risteski introduce the connection between log-concavity and convex optimisation.

\[ x_{t+\eta} = x_t - \eta \nabla f(x_t) + \sqrt{2\eta}\xi_t,\quad \xi_t\sim N(0,I). \]
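As a concrete illustration, here is a minimal NumPy sketch of the unadjusted Langevin update above, applied to a standard Gaussian target. The step size, iteration count and target are arbitrary choices for demonstration, not a tuned or Metropolis-adjusted implementation.

```python
import numpy as np

def ula_sample(grad_f, x0, eta=1e-2, n_steps=5_000, rng=None):
    """Unadjusted Langevin algorithm: iterate the update above.

    grad_f : gradient of the potential f = -log density (up to a constant)
    x0     : starting point, shape (d,)
    eta    : step size (illustrative value, not tuned)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        xi = rng.standard_normal(x.size)
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * xi
        samples[t] = x
    return samples

# Example: a standard Gaussian has potential f(x) = ||x||^2 / 2,
# so grad_f(x) = x and the target is log-concave.
draws = ula_sample(grad_f=lambda x: x, x0=np.zeros(2))
print(draws.mean(axis=0), draws.var(axis=0))
# mean is roughly 0; variance is slightly above 1 because the
# fixed-step discretisation biases the stationary distribution.
```

Note the bias comment: without a Metropolis correction, the chain targets a perturbation of the true density that shrinks as the step size does, which is one reason the Metropolis-adjusted and implicit variants in the references exist.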

Rob Salomone explains this well; see Hodgkinson, Salomone, and Roosta (2019).

Andrej Risteski’s Beyond log-concave sampling series is also a good introduction to log-concave sampling.

References

Bagnoli, Mark, and Ted Bergstrom. 1989. “Log-Concave Probability and Its Applications,” 17.
Brosse, Nicolas, Alain Durmus, and Eric Moulines. n.d. “The Promises and Pitfalls of Stochastic Gradient Langevin Dynamics,” 11.
Castellani, Tommaso, and Andrea Cavagna. 2005. “Spin-Glass Theory for Pedestrians.” Journal of Statistical Mechanics: Theory and Experiment 2005 (05): P05012. https://doi.org/10.1088/1742-5468/2005/05/P05012.
Domke, Justin. 2017. “A Divergence Bound for Hybrids of MCMC and Variational Inference and an Application to Langevin Dynamics and SGVI.” In PMLR, 1029–38. http://proceedings.mlr.press/v70/domke17a.html.
Duane, Simon, A. D. Kennedy, Brian J. Pendleton, and Duncan Roweth. 1987. “Hybrid Monte Carlo.” Physics Letters B 195 (2): 216–22. https://doi.org/10.1016/0370-2693(87)91197-X.
Durmus, Alain, and Eric Moulines. 2016. “High-Dimensional Bayesian Inference via the Unadjusted Langevin Algorithm.” May 5, 2016. http://arxiv.org/abs/1605.01559.
Garbuno-Inigo, Alfredo, Franca Hoffmann, Wuchen Li, and Andrew M. Stuart. 2020. “Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems 19 (1): 412–41. https://doi.org/10.1137/19M1251655.
Ge, Rong, Holden Lee, and Andrej Risteski. 2020. “Simulated Tempering Langevin Monte Carlo II: An Improved Proof Using Soft Markov Chain Decomposition.” September 9, 2020. http://arxiv.org/abs/1812.00793.
Girolami, Mark, and Ben Calderhead. 2011. “Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (2): 123–214. https://doi.org/10.1111/j.1467-9868.2010.00765.x.
Hodgkinson, Liam, Robert Salomone, and Fred Roosta. 2019. “Implicit Langevin Algorithms for Sampling From Log-Concave Densities.” March 28, 2019. http://arxiv.org/abs/1903.12322.
Mandt, Stephan, Matthew D. Hoffman, and David M. Blei. 2017. “Stochastic Gradient Descent as Approximate Bayesian Inference.” JMLR, April. http://arxiv.org/abs/1704.04289.
Mangoubi, Oren, and Aaron Smith. 2017. “Rapid Mixing of Hamiltonian Monte Carlo on Strongly Log-Concave Distributions.” August 23, 2017. http://arxiv.org/abs/1708.07114.
Norton, Richard A., and Colin Fox. 2016. “Tuning of MCMC with Langevin, Hamiltonian, and Other Stochastic Autoregressive Proposals.” October 3, 2016. http://arxiv.org/abs/1610.00781.
Welling, Max, and Yee Whye Teh. n.d. “Bayesian Learning via Stochastic Gradient Langevin Dynamics,” 8.
Xifara, T., C. Sherlock, S. Livingstone, S. Byrne, and M. Girolami. 2014. “Langevin Diffusions and the Metropolis-Adjusted Langevin Algorithm.” Statistics & Probability Letters 91 (August): 14–19. https://doi.org/10.1016/j.spl.2014.04.002.
