Adaptive Markov Chain Monte Carlo samplers

In adaptive MCMC, the trajectories of the simulator are perturbed by external forces (bottom right, centre) to change how they approach the target (top right).

Designing the MCMC transition density by online optimisation for good mixing; also called controlled MCMC (Andrieu and Robert 2001).

Here we are no longer truly using a Markov chain, because the transition parameters depend upon the entire history of the chain (for example, because we dynamically update the proposal parameters to improve mixing). Tutorials: Atchadé et al. (2011) and Andrieu and Thoms (2008).
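To make the adaptive idea concrete, here is a minimal sketch (an illustration, not anyone's reference implementation): a Gaussian random-walk Metropolis sampler whose log step size is nudged after every iteration by a Robbins-Monro rule toward the 0.234 acceptance rate of Roberts and Rosenthal (2001). The function name, the gain exponent 0.6, and the Gaussian example target are all my own illustrative choices.

```python
import numpy as np

def adaptive_rwm(logpdf, x0, n_steps, target_accept=0.234, seed=0):
    """Random-walk Metropolis whose step size adapts at every iteration.

    Because the step size depends on the whole history of the chain, the
    sampler is no longer a (homogeneous) Markov chain.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = logpdf(x)
    log_step = 0.0
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        prop = x + np.exp(log_step) * rng.standard_normal(x.size)
        lp_prop = logpdf(prop)
        accept_prob = float(np.exp(min(0.0, lp_prop - lp)))
        if rng.uniform() < accept_prob:
            x, lp = prop, lp_prop
        # Robbins-Monro update with a gain that shrinks to zero, so the
        # kernel changes less and less (diminishing adaptation).
        log_step += (accept_prob - target_accept) / (t + 1) ** 0.6
        samples[t] = x
    return samples, float(np.exp(log_step))

# Illustrative target: a standard 2-d Gaussian.
samples, tuned_step = adaptive_rwm(lambda x: -0.5 * np.sum(x ** 2), np.zeros(2), 5_000)
```

In higher dimensions one would typically also adapt the full proposal covariance from running sample moments rather than a single scalar step size; Andrieu and Thoms (2008) survey such schemes.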

With a Markov chain it is more complicated: if we perturb the transition density infinitely often, we do not know in general that we will still converge to the target stationary distribution. However, we could do a “pilot” run to estimate well-mixing kernels, then fix the adapted kernels for the remainder of the sampling, discarding the pilot-run samples as suspect and keeping only those generated after adaptation stops. This is then a tuned MCMC rather than an adaptive MCMC.
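Continuing the sketch above (and assuming the `adaptive_rwm` function defined there), the tuned-MCMC variant would look roughly like this: adapt during a pilot phase, freeze the resulting step size, and keep only the samples drawn by the fixed kernel.

```python
import numpy as np

def rwm(logpdf, x0, n_steps, step, seed=1):
    """Plain random-walk Metropolis with a fixed step size (a true Markov chain)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = logpdf(x)
    out = np.empty((n_steps, x.size))
    for t in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = logpdf(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[t] = x
    return out

logpdf = lambda x: -0.5 * np.sum(x ** 2)  # illustrative target

# Pilot phase: adapt, then throw the pilot samples away as suspect.
_, tuned_step = adaptive_rwm(logpdf, np.zeros(2), 2_000)

# Tuned phase: a genuine Markov chain with the frozen kernel; keep these samples.
samples = rwm(logpdf, np.zeros(2), 10_000, tuned_step)
```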

Here I will keep notes, if any, on the perturbation problem: how do we guarantee, by some criterion, that the proposal density is not changing too much? Solutions seem to be sampler-specific.
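For the record, the general-purpose criterion I usually see cited (e.g. Roberts and Rosenthal 2009; Atchadé et al. 2011) is diminishing adaptation, paired with a containment condition on the family of fixed kernels. Roughly,

$$
\sup_{x} \bigl\| P_{\Gamma_{n+1}}(x,\cdot) - P_{\Gamma_n}(x,\cdot) \bigr\|_{\mathrm{TV}} \to 0 \quad \text{in probability,}
$$

where $\Gamma_n$ is the tuning parameter in force at step $n$ and every fixed kernel $P_\gamma$ leaves the target invariant. The decaying Robbins-Monro gain in the sketch above is one standard way to satisfy diminishing adaptation; verifying containment is where the sampler-specific work tends to live.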

Andrieu, Christophe, and Christian P. Robert. 2001. “Controlled MCMC for Optimal Sampling.” Working Paper 2001-33. Center for Research in Economics and Statistics. https://econpapers.repec.org/paper/crswpaper/2001-33.htm.

Andrieu, Christophe, and Johannes Thoms. 2008. “A Tutorial on Adaptive MCMC.” Statistics and Computing 18 (4): 343–73. https://doi.org/10.1007/s11222-008-9110-y.

Atchadé, Yves, Gersende Fort, Eric Moulines, and Pierre Priouret. 2011. “Adaptive Markov Chain Monte Carlo: Theory and Methods.” In Bayesian Time Series Models, edited by David Barber, A. Taylan Cemgil, and Silvia Chiappa, 32–51. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511984679.003.

Gilks, Walter R., Gareth O. Roberts, and Sujit K. Sahu. 1998. “Adaptive Markov Chain Monte Carlo Through Regeneration.” Journal of the American Statistical Association 93 (443): 1045–54. https://doi.org/10.2307/2669848.

Griffin, Jim, Krys Latuszynski, and Mark Steel. 2019. “In Search of Lost (Mixing) Time: Adaptive Markov Chain Monte Carlo Schemes for Bayesian Variable Selection with Very Large P,” May. http://arxiv.org/abs/1708.05678.

Maire, Florian, Nial Friel, Antonietta Mira, and Adrian E. Raftery. 2019. “Adaptive Incremental Mixture Markov Chain Monte Carlo.” Journal of Computational and Graphical Statistics 28 (4): 790–805. https://doi.org/10.1080/10618600.2019.1598872.

Mathew, B., A. M. Bauer, P. Koistinen, T. C. Reetz, J. Léon, and M. J. Sillanpää. 2012. “Bayesian Adaptive Markov Chain Monte Carlo Estimation of Genetic Parameters.” Heredity 109 (4): 235–45. https://doi.org/10.1038/hdy.2012.35.

Roberts, Gareth O., and Jeffrey S. Rosenthal. 2001. “Optimal Scaling for Various Metropolis-Hastings Algorithms.” Statistical Science 16 (4): 351–67. https://doi.org/10.1214/ss/1015346320.

———. 2009. “Examples of Adaptive MCMC.” Journal of Computational and Graphical Statistics 18 (2): 349–67. https://doi.org/10.1198/jcgs.2009.06134.

Rosenthal, Jeffrey. 2011. “Optimal Proposal Distributions and Adaptive MCMC.” In Handbook of Markov Chain Monte Carlo, edited by Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Vol. 20116022. Chapman & Hall/CRC Handbooks of Modern Statistical Methods. Chapman and Hall/CRC. https://doi.org/10.1201/b10905-5.

Sejdinovic, Dino, Heiko Strathmann, Maria Lomeli Garcia, Christophe Andrieu, and Arthur Gretton. 2014. “Kernel Adaptive Metropolis-Hastings.” In International Conference on Machine Learning, 1665–73. Beijing, China: JMLR.org. http://arxiv.org/abs/1307.5302.

Strathmann, Heiko, Dino Sejdinovic, Samuel Livingstone, Zoltan Szabo, and Arthur Gretton. 2015. “Gradient-Free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families.” In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, 955–63. NIPS’15. Montreal, Canada: MIT Press. http://arxiv.org/abs/1506.02564.