Tuning an MCMC sampler


The process of adapting to the target optimally

Designing the MCMC transition density, typically via the proposal density in a Metropolis-style accept/reject step, by optimising for good mixing.

The simplest way to do this is a “pilot” run: estimate good mixing kernels from it, then run again with the adapted kernels, discarding the pilot samples as suspect. This wastes some effort but is theoretically simple. Alternatively, you can adapt dynamically, online, which is called Adaptive MCMC. There are then some theoretical wrinkles: the adapted chain is no longer Markovian, so ergodicity needs extra conditions such as diminishing adaptation (Roberts and Rosenthal 2009).
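A minimal sketch of the pilot-run approach, assuming a hypothetical correlated Gaussian target and a random-walk Metropolis kernel: run a throwaway pilot chain with a naive proposal, estimate the target covariance from it, rescale by the well-known 2.38²/d optimal-scaling rule for Gaussian targets, then discard the pilot and rerun.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: a correlated 2-D Gaussian, log-density up to a constant.
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(cov)

def log_target(x):
    return -0.5 * x @ prec @ x

def rw_metropolis(log_target, x0, prop_cov, n, rng):
    """Random-walk Metropolis with Gaussian proposal covariance prop_cov."""
    chol = np.linalg.cholesky(prop_cov)
    x, lp = x0, log_target(x0)
    out = np.empty((n, x0.size))
    accepts = 0
    for i in range(n):
        xp = x + chol @ rng.standard_normal(x0.size)
        lpp = log_target(xp)
        if np.log(rng.uniform()) < lpp - lp:
            x, lp = xp, lpp
            accepts += 1
        out[i] = x
    return out, accepts / n

d = 2
# Pilot run with a naive isotropic proposal.
pilot, _ = rw_metropolis(log_target, np.zeros(d), np.eye(d), 2000, rng)
# Adapt: rescale the pilot's empirical covariance by 2.38^2 / d,
# then discard the pilot samples entirely.
adapted_cov = (2.38**2 / d) * np.cov(pilot, rowvar=False)
samples, acc = rw_metropolis(log_target, pilot[-1], adapted_cov, 10000, rng)
```

The second chain now proposes along the target's correlation structure rather than isotropically, which is the whole payoff of the pilot phase.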

I wish to maximise the mixing rate by some criterion. But if I already know my mixing rate is bad without optimising (which is why I am optimising), how do I get the simulations against which to conduct the optimisation? And how do we optimise simultaneously for maximising the mixing rate and minimising the rejection rate? Fearnhead and Taylor (2013) summarise some options for an objective function. One that seems sufficient for publication of typical MCMC papers is the Expected Squared Jump Distance, ESJD (more precisely, an expected squared Mahalanobis distance between successive samples); maximising it minimises the lag-1 autocorrelation, which is in practice most of what we do.
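To make ESJD concrete, here is a sketch that scores chains from a toy 1-D random-walk Metropolis sampler and keeps the proposal scale with the largest ESJD. The target and candidate scales are illustrative assumptions; passing an inverse covariance as `metric` turns the criterion into the Mahalanobis variant.

```python
import numpy as np

rng = np.random.default_rng(1)

def esjd(chain, metric=None):
    """Mean squared distance between successive samples; with `metric`
    (an inverse covariance) it is an expected squared Mahalanobis distance."""
    d = np.diff(chain, axis=0)
    if metric is None:
        return np.mean(np.sum(d * d, axis=1))
    return np.mean(np.einsum('ij,jk,ik->i', d, metric, d))

def rw_chain(scale, n=5000):
    # 1-D random-walk Metropolis on a standard normal target (illustrative).
    x = 0.0
    out = np.empty((n, 1))
    for i in range(n):
        xp = x + scale * rng.standard_normal()
        if np.log(rng.uniform()) < 0.5 * (x * x - xp * xp):
            x = xp
        out[i] = x
    return out

# Sweep a few proposal scales and keep the one with the largest ESJD.
scales = [0.1, 0.5, 2.4, 10.0]
best = max(scales, key=lambda s: esjd(rw_chain(s)))
```

Note how the criterion penalises both extremes at once: tiny scales make accepted jumps short, huge scales make rejections frequent, and ESJD is maximised in between.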

Proposal density

Designing the proposal density is often easy for an independent rejection sampler; that is precisely the cross-entropy method. For a Markov chain, though, the success criterion is muddier. AFAICT the cross-entropy trick does not apply to non-i.i.d. samples.
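A minimal sketch of that cross-entropy update for an independent Gaussian proposal, assuming a hypothetical unnormalised 1-D target: draw from the current proposal, importance-weight each draw by target over proposal, then move the proposal parameters to the weighted maximum-likelihood estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalised target: a unit Gaussian centred at 3.
def log_target(x):
    return -0.5 * (x - 3.0) ** 2

# Cross-entropy iterations: sample from the current Gaussian proposal,
# importance-weight by target / proposal, then set (mu, sigma) to the
# weighted maximum-likelihood estimates for a Gaussian.
mu, sigma = 0.0, 5.0
for _ in range(20):
    x = mu + sigma * rng.standard_normal(2000)
    log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
    logw = log_target(x) - log_q
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mu = float(np.sum(w * x))
    sigma = float(np.sqrt(np.sum(w * (x - mu) ** 2)))
```

After a few iterations the proposal settles near the target's own mean and standard deviation, which is exactly what makes an independent sampler efficient; the update only makes sense because each batch of draws is i.i.d. from the current proposal.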

Transition density

🏗

Adaptive SMC

In Sequential Monte Carlo, which is not MCMC, we need not be so careful about changing the proposal parameters, since there is no stationary-distribution argument to break. See Fearnhead and Taylor (2013).
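To illustrate how freely SMC can retune its kernels, here is a tempered-SMC sketch (the prior, target, temperature ladder, and move kernel are all illustrative assumptions) in which the random-walk move scale is re-estimated from the particle cloud at every temperature, with no stationarity bookkeeping required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Broad "prior" and hypothetical narrow target, both 1-D Gaussians.
def log_prior(x):
    return -0.5 * (x / 10.0) ** 2

def log_target(x):
    return -0.5 * ((x - 4.0) / 0.5) ** 2

n = 2000
x = 10.0 * rng.standard_normal(n)          # particles drawn from the prior
betas = np.linspace(0.0, 1.0, 11)          # fixed temperature ladder
for b_prev, b in zip(betas[:-1], betas[1:]):
    # Incremental importance weights for the temperature change.
    logw = (b - b_prev) * (log_target(x) - log_prior(x))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x = x[rng.choice(n, size=n, p=w)]      # multinomial resampling
    # One Metropolis move per particle targeting the tempered density;
    # the proposal scale is retuned from the current particle spread,
    # which would be illegal mid-run in plain MCMC but is harmless here.
    def log_tempered(y):
        return (1.0 - b) * log_prior(y) + b * log_target(y)
    scale = 2.4 * x.std() + 1e-12
    xp = x + scale * rng.standard_normal(n)
    accept = np.log(rng.uniform(size=n)) < log_tempered(xp) - log_tempered(x)
    x = np.where(accept, xp, x)
```

The reweight/resample steps, not the move kernel's invariance over a long run, carry the distributional argument, so the kernel parameters can depend on the particles themselves.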

Variational inference

What is Hamiltonian Variational Inference? Does that fit here? 🏗 (Caterini, Doucet, and Sejdinovic 2018; Salimans, Kingma, and Welling 2015)

Caterini, Anthony L., Arnaud Doucet, and Dino Sejdinovic. 2018. “Hamiltonian Variational Auto-Encoder.” In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1805.11328.

Fearnhead, Paul, and Benjamin M. Taylor. 2013. “An Adaptive Sequential Monte Carlo Sampler.” Bayesian Analysis 8 (2): 411–38. https://doi.org/10.1214/13-BA814.

Mathew, B, A M Bauer, P Koistinen, T C Reetz, J Léon, and M J Sillanpää. 2012. “Bayesian Adaptive Markov Chain Monte Carlo Estimation of Genetic Parameters.” Heredity 109 (4): 235–45. https://doi.org/10.1038/hdy.2012.35.

Roberts, Gareth O., and Jeffrey S. Rosenthal. 2009. “Examples of Adaptive MCMC.” Journal of Computational and Graphical Statistics 18 (2): 349–67. https://doi.org/10.1198/jcgs.2009.06134.

———. 2014. “Minimising MCMC Variance via Diffusion Limits, with an Application to Simulated Tempering.” Annals of Applied Probability 24 (1): 131–49. https://doi.org/10.1214/12-AAP918.

Salimans, Tim, Diederik Kingma, and Max Welling. 2015. “Markov Chain Monte Carlo and Variational Inference: Bridging the Gap.” In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 1218–26. ICML’15. Lille, France: JMLR.org. http://proceedings.mlr.press/v37/salimans15.html.

Schuster, Ingmar, Heiko Strathmann, Brooks Paige, and Dino Sejdinovic. 2017. “Kernel Sequential Monte Carlo.” In ECML-PKDD 2017. http://arxiv.org/abs/1510.03105.