Designing the MCMC transition density, possibly via the proposal density in the accept/reject step, by optimisation for good mixing.

The simplest way to do this is a "pilot" run to estimate a well-mixing kernel, then a main run using the adapted kernel, discarding the suspect samples from the pilot. This wastes some effort but is theoretically simple. Alternatively, you could adapt dynamically, online, which is called Adaptive MCMC. There are then some theoretical wrinkles.
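A minimal sketch of the pilot-then-adapt recipe, assuming a Gaussian random-walk Metropolis kernel and a made-up anisotropic Gaussian target; the `2.38**2 / d` rescaling of the pilot's empirical covariance is the classic optimal-scaling heuristic, not anything specific to this post:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Hypothetical anisotropic Gaussian target: var 9 in dim 0, var 1 in dim 1
    return -0.5 * (x[0] ** 2 / 9.0 + x[1] ** 2)

def rw_metropolis(log_p, x0, cov, n, rng):
    """Random-walk Metropolis with multivariate Gaussian proposals."""
    chol = np.linalg.cholesky(cov)
    x, lp = np.array(x0, dtype=float), log_p(x0)
    out = np.empty((n, len(x)))
    for i in range(n):
        prop = x + chol @ rng.standard_normal(len(x))
        lp_prop = log_p(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Pilot run with a naive isotropic proposal
pilot = rw_metropolis(log_target, np.zeros(2), np.eye(2), 5000, rng)

# Adapt: rescale the pilot's empirical covariance by 2.38^2 / d,
# with a small jitter to keep the Cholesky factorisation safe
d = 2
adapted_cov = (2.38 ** 2 / d) * np.cov(pilot.T) + 1e-6 * np.eye(d)

# Main run with the adapted kernel; the pilot samples are discarded
samples = rw_metropolis(log_target, pilot[-1], adapted_cov, 5000, rng)
```

The adapted kernel proposes long steps along the target's wide axis and short steps along its narrow one, which is exactly what the isotropic pilot kernel could not do.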

I do wish to maximise the mixing rate by some criterion.
But if I already know my mixing rate is bad without optimising (which is why I am optimising), how do I get trustworthy simulations against which to conduct the optimisation?
And how do we optimise simultaneously for a high mixing rate and a low rejection rate?
Fearnhead and Taylor (2013) summarise some options here for an objective function.
One that seems sufficient for publication of typical MCMC papers is Expected Squared Jump Distance, *ESJD* (more precisely, an expected squared Mahalanobis distance between successive samples); maximising it minimises the lag-1 autocorrelation, which in practice is most of what we do.
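ESJD is cheap to estimate from a pilot run, so tuning often reduces to a search over proposal parameters. A sketch, with a hypothetical 1-d standard-normal target, a hand-rolled random-walk Metropolis sampler, and a crude grid search over the proposal scale:

```python
import numpy as np

def esjd(chain, precision=None):
    """Expected squared jump distance of a chain (rows are samples).
    With a `precision` matrix this becomes the expected squared
    Mahalanobis distance between successive samples."""
    d = np.diff(chain, axis=0)
    if precision is None:
        return np.mean(np.sum(d * d, axis=1))
    return np.mean(np.einsum('ij,jk,ik->i', d, precision, d))

def rwm(log_p, scale, n, rng):
    """1-d random-walk Metropolis, used here only to score proposal scales."""
    x, lp = 0.0, log_p(0.0)
    out = np.empty((n, 1))
    for i in range(n):
        prop = x + scale * rng.standard_normal()
        lp_prop = log_p(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Pilot-run grid search: keep the proposal scale with the best ESJD
rng = np.random.default_rng(3)
log_p = lambda x: -0.5 * x * x  # standard normal target
scores = {s: esjd(rwm(log_p, s, 4000, rng)) for s in (0.1, 1.0, 2.4, 10.0)}
best_scale = max(scores, key=scores.get)
```

Rejected proposals contribute zero jumps, so ESJD penalises both timid proposals (tiny jumps) and over-ambitious ones (high rejection), which is why it addresses the mixing-versus-rejection trade-off above in a single number.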

## Proposal density

Designing the proposal density is often easy for an *independent* rejection sampler.
That is precisely the cross-entropy method.
For a Markov chain, though, the success criterion is muddier.
AFAICT the cross-entropy trick does not apply to non-i.i.d. samples.

## Transition density


## Adaptive SMC

In Sequential Monte Carlo, which is not MCMC, we do not need to be so careful about changing the proposal parameters on the fly, since there is no stationary-distribution argument to break. See Fearnhead and Taylor (2013).
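A sketch of the kind of per-stage adaptation this licenses: tempered SMC from a broad Gaussian to a made-up narrow target, re-tuning the random-walk move scale at every stage from the current particle spread (the particular target, temperature ladder, and `2.38 * std` rule are illustrative choices, not Fearnhead and Taylor's scheme):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    return -0.5 * ((x - 2.0) / 0.3) ** 2  # hypothetical narrow target

log_prior = lambda x: -0.5 * (x / 2.0) ** 2  # broad N(0, 2^2) start

n = 1000
betas = np.linspace(0.0, 1.0, 11)  # temperature ladder
x = 2.0 * rng.standard_normal(n)   # particles from the prior

for b0, b1 in zip(betas[:-1], betas[1:]):
    # Reweight for the new temperature, then resample
    log_w = (b1 - b0) * (log_target(x) - log_prior(x))
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    x = x[rng.choice(n, size=n, p=w)]

    # Adapt: set the move scale from the current particle spread;
    # no stationarity argument is endangered by doing this
    tau = 2.38 * x.std() + 1e-9

    # One Metropolis move per particle at temperature b1
    lp = lambda z: b1 * log_target(z) + (1.0 - b1) * log_prior(z)
    prop = x + tau * rng.standard_normal(n)
    accept = np.log(rng.random(n)) < lp(prop) - lp(x)
    x = np.where(accept, prop, x)
```

Because the particles at each stage are a snapshot of an evolving population rather than a single chain that must preserve a stationary distribution, the move kernel can depend on the particles themselves without any of the Adaptive MCMC wrinkles.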

## Variational inference

What is *Hamiltonian Variational Inference*? Does that fit under this heading?
(Caterini, Doucet, and Sejdinovic 2018; Salimans, Kingma, and Welling 2015)

## References

Caterini, Anthony L., Arnaud Doucet, and Dino Sejdinovic. 2018. "Hamiltonian Variational Auto-Encoder." In *Advances in Neural Information Processing Systems*.

Fearnhead, Paul, and Benjamin M. Taylor. 2013. "An Adaptive Sequential Monte Carlo Sampler." *Bayesian Analysis* 8 (2): 411–38.

*Heredity* 109 (4): 235–45.

*arXiv:1610.00781 [Math, Stat]*, October.

Roberts, Gareth O., and Jeffrey S. Rosenthal. 2009. "Examples of Adaptive MCMC." *Journal of Computational and Graphical Statistics* 18 (2): 349–67.

*Annals of Applied Probability* 24 (1): 131–49.

Salimans, Tim, Diederik P. Kingma, and Max Welling. 2015. "Markov Chain Monte Carlo and Variational Inference: Bridging the Gap." In *Proceedings of the 32nd International Conference on Machine Learning (ICML-15)*, 1218–26. Lille, France: JMLR.org.

*ECML-PKDD 2017*.
