In adaptive MCMC, the trajectories of the simulator are perturbed by external forces (bottom right, centre) to change how they approach the target (top right)
Designing the MCMC transition density by online optimisation for optimal mixing; also called controlled MCMC.
Here we are no longer truly using a Markov chain, because the transition parameters depend upon the entire history of the chain (for example, because we are dynamically updating the transition parameters to improve mixing). Tutorials: Atchadé et al. (2011) and Andrieu and Thoms (2008).
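The canonical example is adaptive Metropolis in the spirit of Haario et al. (2001), where the Gaussian proposal covariance is re-estimated from the chain's entire history, so the kernel at step t depends on everything that came before. Here is a minimal sketch; the target, the warm-up length, and the tuning constants (the 2.38²/d scaling, the jitter `eps`) are illustrative choices on my part, not a reference implementation.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_steps, adapt_start=500, eps=1e-6):
    d = len(x0)
    scale = 2.38 ** 2 / d                     # classic optimal-scaling constant
    chain = np.empty((n_steps, d))
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    mean, cov = x.copy(), np.eye(d)           # running moments of the whole history
    for t in range(n_steps):
        # After a fixed-kernel warm-up, propose from the history-based covariance.
        if t >= adapt_start:
            prop_cov = scale * cov + eps * np.eye(d)
        else:
            prop_cov = 0.1 * np.eye(d)
        prop = np.random.multivariate_normal(x, prop_cov)
        logp_prop = log_target(prop)
        if np.log(np.random.rand()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain[t] = x
        # Recursive update of the mean and covariance over the entire chain history.
        n = t + 2                              # count includes the starting point
        delta = x - mean
        mean = mean + delta / n
        cov = cov + (np.outer(delta, x - mean) - cov) / n
    return chain

samples = adaptive_metropolis(lambda z: -0.5 * np.sum(z ** 2), np.zeros(2), 5000)
```

Because `cov` is built from all past states, the proposal at step t is a function of the full history, which is exactly what breaks the Markov property.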
With a Markov chain it is more complicated: if we perturb the transition density infinitely often, we do not know in general that we will still converge to the target stationary distribution. However, we could do a "pilot" run to estimate well-mixing kernels, then run with the adapted kernels held fixed, discarding the samples from the pilot run as suspect and keeping only those generated afterwards. This is then a tuned MCMC rather than an adaptive MCMC.
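A minimal sketch of that two-stage recipe, assuming a random-walk Metropolis sampler whose step size is tuned during the pilot phase (towards the conventional ~0.234 acceptance rate) and then frozen; the batch sizes and targets here are illustrative, not prescriptive.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_steps, step):
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    chain, accepts = np.empty((n_steps, len(x))), 0
    for t in range(n_steps):
        prop = x + step * np.random.randn(len(x))
        logp_prop = log_target(prop)
        if np.log(np.random.rand()) < logp_prop - logp:
            x, logp, accepts = prop, logp_prop, accepts + 1
        chain[t] = x
    return chain, accepts / n_steps

def tuned_mcmc(log_target, x0, n_pilot_batches=20, n_final=10000, step=1.0):
    # Pilot phase: adjust the step size in short batches towards a ~23%
    # acceptance rate, with a shrinking adjustment so the tuning settles down.
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_pilot_batches + 1):
        pilot, acc = random_walk_metropolis(log_target, x, 100, step)
        step *= np.exp((acc - 0.234) / k)    # diminishing adjustment
        x = pilot[-1]                         # warm-start the next batch
    # Final phase: kernel frozen, pilot samples discarded.
    chain, _ = random_walk_metropolis(log_target, x, n_final, step)
    return chain

samples = tuned_mcmc(lambda z: -0.5 * np.sum(z ** 2), np.zeros(2))
```

Since only the final, fixed-kernel chain is kept, the usual Markov chain convergence arguments apply to the retained samples.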
Here I will keep notes, if any, on the perturbation problem: how do we guarantee that the proposal density is not changing too much, by some criterion? Solutions to this seem to be sampler-specific.
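One general-purpose ingredient in the usual answer is diminishing adaptation: make the change to the transition parameters at step t shrink to zero, for example via a Robbins–Monro-style update with a vanishing gain (on its own this is not sufficient; a containment-type condition is also needed). The toy loop below only illustrates the schedule; the exponent, the acceptance target, and the simulated accept/reject are arbitrary illustrative values.

```python
import numpy as np

log_step, target_accept = 0.0, 0.234
for t in range(1, 10001):
    gamma = t ** -0.6                  # vanishing adaptation rate
    # Stand-in for whether step t's proposal was accepted; in a real sampler
    # this would come from the Metropolis accept/reject decision.
    accepted = np.random.rand() < 0.3
    log_step += gamma * (float(accepted) - target_accept)
    # The kernel parameter moves by at most gamma per step, and gamma -> 0,
    # so the perturbation to the proposal dies away over the run.
```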