Monte Carlo methods

Approximating functionals (traditionally integrals) by guessing cleverly, i.e. by averaging over random samples. The most prominent use case is approximate statistical inference, especially Bayesian inference, where various Monte Carlo methods turn out to be effective for various posterior computations. That is far from the only use, however.
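A minimal sketch of the basic idea: estimate an integral as the average of the integrand at uniform random points. The integrand and sample size here are illustrative choices, not anything canonical.

```python
import random

def mc_estimate(f, n=100_000, seed=1):
    """Estimate the integral of f over [0, 1] by averaging f at
    n uniform random points; the error shrinks like O(1/sqrt(n))."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

# Estimate the integral of x^2 over [0, 1], which is exactly 1/3.
est = mc_estimate(lambda x: x * x)
```

The same recipe works in any dimension, which is the point: the O(1/√n) rate does not depend on the dimension of the domain.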

Markov chain samplers

See Markov Chain Monte Carlo.

Multi-level Monte Carlo

Estimate an expectation via a telescoping sum of corrections across a hierarchy of discretisation levels, taking many cheap samples at coarse levels and few expensive samples at fine levels (Giles 2008). See also multi-scale Monte Carlo and multi-index Monte Carlo (Haji-Ali, Nobile, and Tempone 2016). 🏗️
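A rough sketch of the telescoping idea for a geometric Brownian motion, in the spirit of Giles (2008). The SDE parameters, level count, and per-level sample sizes are arbitrary illustrative choices; fine and coarse paths at each level share Brownian increments so their difference has small variance.

```python
import math
import random

def euler_pair(rng, n_fine, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """One coupled (fine, coarse) Euler path for GBM dX = mu*X dt + sigma*X dW.
    The coarse path takes half as many steps, using each pair of fine
    Brownian increments summed, so the two paths stay close."""
    dt = T / n_fine
    xf = xc = x0
    dw_pair = 0.0
    for i in range(n_fine):
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xf += mu * xf * dt + sigma * xf * dw
        dw_pair += dw
        if i % 2 == 1:  # advance the coarse path every two fine steps
            xc += mu * xc * (2 * dt) + sigma * xc * dw_pair
            dw_pair = 0.0
    return xf, xc

def mlmc(levels=4, n0=2000, seed=0):
    """Telescoping MLMC estimate of E[X_T]: a cheap level-0 estimate plus
    correction terms E[P_l - P_{l-1}] with fewer samples at finer levels."""
    rng = random.Random(seed)
    est = sum(euler_pair(rng, 2)[0] for _ in range(n0)) / n0
    for l in range(1, levels):
        n_l = max(n0 // 2 ** l, 100)
        s = 0.0
        for _ in range(n_l):
            xf, xc = euler_pair(rng, 2 ** (l + 1))
            s += xf - xc
        est += s / n_l
    return est

# The exact answer for these parameters is x0 * exp(mu * T) = e^0.05.
est = mlmc()
```

The payoff is in the sample allocation: most of the work happens at the coarse levels, where samples are cheap, while the expensive fine levels only need to resolve small corrections.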

Population Monte Carlo

Not sure of the details yet. See Cappé et al. (2004), whose abstract summarises the idea:

Importance sampling methods can be iterated like MCMC algorithms, while being more robust against dependence and starting values. The population Monte Carlo principle consists of iterated generations of importance samples, with importance functions depending on the previously generated importance samples. The advantage over MCMC algorithms is that the scheme is unbiased at any iteration and can thus be stopped at any time, while iterations improve the performance of the importance function, thus leading to an adaptive importance sampling.
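The loop described above can be sketched in a few lines. The Gaussian target, poor initial proposal, and population size below are all illustrative assumptions, not choices from the paper: each generation draws an importance-weighted population, then re-fits the proposal to it.

```python
import math
import random

def pmc(iters=20, n=500, seed=0):
    """Minimal population Monte Carlo: iterate importance sampling,
    re-fitting a Gaussian proposal to the weighted population each round.
    Target: an unnormalised N(3, 1) density; we estimate its mean."""
    rng = random.Random(seed)
    target = lambda x: math.exp(-0.5 * (x - 3.0) ** 2)
    mu, sigma = 0.0, 2.0  # deliberately poor initial proposal
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        # Importance weights: target / proposal density
        # (constants cancel after self-normalisation).
        ws = [target(x) / (math.exp(-0.5 * ((x - mu) / sigma) ** 2) / sigma)
              for x in xs]
        total = sum(ws)
        ws = [w / total for w in ws]
        # Adapt the proposal to the weighted population.
        mu = sum(w * x for w, x in zip(ws, xs))
        var = sum(w * (x - mu) ** 2 for w, x in zip(ws, xs))
        sigma = max(math.sqrt(var), 1e-3)
    return mu

est = pmc()  # should approach the target mean, 3.0
```

Because each generation is a plain self-normalised importance sampler, the estimate is usable after any iteration; later iterations merely sharpen the proposal.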

Sequential Monte Carlo

Filed under particle filters.

Quasi Monte Carlo

Don’t even guess randomly; sample cleverly from a deterministic low-discrepancy sequence instead, using the shiny Quasi Monte Carlo.
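A small sketch of the flavour, using a hand-rolled Halton sequence (one standard low-discrepancy construction; the integrand and sample size are illustrative). The points fill the unit square far more evenly than i.i.d. uniforms, so the average converges faster for smooth integrands.

```python
def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in the
    given base: reverse the base-b digits of i across the radix point."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_estimate(f, n=4096):
    """Estimate a 2-D integral over [0,1]^2 with Halton points,
    using coprime bases 2 and 3 for the two coordinates."""
    return sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n

# Integral of x*y over the unit square is exactly 1/4.
est = qmc_estimate(lambda x, y: x * y)
```

For smooth integrands the error decays roughly like O((log n)^d / n), which handily beats the O(1/√n) of plain Monte Carlo in modest dimension.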

Cross Entropy Method

For automatically adapting an importance sampling distribution. TBC.
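A toy version of the adaptation loop, here used for optimisation rather than rare-event sampling (both are standard uses; see Rubinstein and Kroese 2004). The objective, elite fraction, and iteration budget are illustrative assumptions: each round fits the sampling distribution to the best-scoring samples, concentrating it on the optimum.

```python
import random

def cross_entropy_maximise(f, iters=30, n=100, elite=10, seed=0):
    """Minimal cross-entropy method: sample from a Gaussian, keep the
    elite (best-scoring) draws, and re-fit the Gaussian to them."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 5.0  # broad initial sampling distribution
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        xs.sort(key=f, reverse=True)
        best = xs[:elite]
        mu = sum(best) / elite
        var = sum((x - mu) ** 2 for x in best) / elite
        sigma = var ** 0.5 + 1e-6  # floor keeps sampling alive
    return mu

# Maximise f(x) = -(x - 2)^2, whose optimum sits at x = 2.
est = cross_entropy_maximise(lambda x: -(x - 2.0) ** 2)
```

The same machinery, with the indicator of a rare event as the score, tilts an importance sampling distribution toward that event.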

Monte Carlo gradient estimation

See MC gradients.
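For flavour, the simplest of the estimators surveyed by Mohamed et al. (2020): the score-function (REINFORCE) estimator, shown here on a toy Gaussian expectation of my own choosing.

```python
import random

def score_function_grad(mu, n=200_000, seed=0):
    """Score-function (REINFORCE) estimate of d/dmu E_{x~N(mu,1)}[x^2],
    via grad = E[f(x) * d/dmu log p(x; mu)], where the score
    d/dmu log p(x; mu) for a unit-variance Gaussian is (x - mu)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        total += (x * x) * (x - mu)  # f(x) times the score
    return total / n

# E[x^2] = mu^2 + 1, so the true gradient at mu = 1.5 is 2*mu = 3.
g = score_function_grad(1.5)
```

The estimator only needs to evaluate f, never differentiate it, which is why it works for discrete and black-box objectives; the price is high variance compared with pathwise (reparameterisation) estimators.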


Anderson, David F., and Desmond J. Higham. 2012. “Multilevel Monte Carlo for Continuous Time Markov Chains, with Applications in Biochemical Kinetics.” Multiscale Modeling & Simulation 10 (1): 146–79.
Andrieu, Christophe, and Johannes Thoms. 2008. “A Tutorial on Adaptive MCMC.” Statistics and Computing 18 (4): 343–73.
Atchadé, Yves, Gersende Fort, Eric Moulines, and Pierre Priouret. 2011. “Adaptive Markov Chain Monte Carlo: Theory and Methods.” In Bayesian Time Series Models, edited by David Barber, A. Taylan Cemgil, and Silvia Chiappa, 32–51. Cambridge: Cambridge University Press.
Cappé, O., A. Guillin, J.-M. Marin, and C. P. Robert. 2004. “Population Monte Carlo.” Journal of Computational and Graphical Statistics 13 (4): 907–29.
Casella, George, and Christian P. Robert. 1996. “Rao-Blackwellisation of Sampling Schemes.” Biometrika 83 (1): 81–94.
Cranmer, Kyle, Johann Brehmer, and Gilles Louppe. 2020. “The Frontier of Simulation-Based Inference.” Proceedings of the National Academy of Sciences, May.
Elvira, Víctor, and Emilie Chouzenoux. 2021. “Optimized Population Monte Carlo.”
Geffner, Tomas, and Justin Domke. 2018. “Using Large Ensembles of Control Variates for Variational Inference.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 9982–92. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Giles, Michael B. 2008. “Multilevel Monte Carlo Path Simulation.” Operations Research 56 (3): 607–17.
Giles, Michael B., and Lukasz Szpruch. 2014. “Antithetic Multilevel Monte Carlo Estimation for Multi-Dimensional SDEs Without Lévy Area Simulation.” The Annals of Applied Probability 24 (4): 1585–1620.
Giles, Mike, and Lukasz Szpruch. 2012. “Multilevel Monte Carlo Methods for Applications in Finance.” arXiv:1212.1377 [q-fin], December.
Gu, Shixiang, Zoubin Ghahramani, and Richard E. Turner. 2015. “Neural Adaptive Sequential Monte Carlo.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2629–37. Curran Associates, Inc.
Haji-Ali, Abdul-Lateef, Fabio Nobile, and Raúl Tempone. 2016. “Multi-Index Monte Carlo: When Sparsity Meets Sampling.” Numerische Mathematik 132 (4): 767–806.
Higham, Desmond J. 2015. “An Introduction to Multilevel Monte Carlo for Option Valuation.” arXiv:1505.00965 [physics, q-fin, stat], May.
Liu, Jun S. 1996. “Metropolized Independent Sampling with Comparisons to Rejection Sampling and Importance Sampling.” Statistics and Computing 6 (2): 113–19.
Mohamed, Shakir, Mihaela Rosca, Michael Figurnov, and Andriy Mnih. 2020. “Monte Carlo Gradient Estimation in Machine Learning.” Journal of Machine Learning Research 21 (132): 1–62.
Robert, Christian P., and George Casella. 2004. Monte Carlo Statistical Methods. 2nd ed. Springer Texts in Statistics. New York: Springer.
Rubinstein, Reuven Y., and Dirk P. Kroese. 2016. Simulation and the Monte Carlo Method. 3rd ed. Wiley Series in Probability and Statistics. Hoboken, New Jersey: Wiley.
Rubinstein, Reuven Y., and Dirk P. Kroese. 2004. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. New York, NY: Springer.
Rubinstein, Reuven Y., Ad Ridder, and Radislav Vaisman. 2014. Fast Sequential Monte Carlo Methods for Counting and Optimization. Wiley Series in Probability and Statistics. Hoboken, New Jersey: Wiley.
Vehtari, Aki, Daniel Simpson, Andrew Gelman, Yuling Yao, and Jonah Gabry. 2019. “Pareto Smoothed Importance Sampling.” arXiv:1507.02646 [stat], July.
Virrion, Benjamin. 2020. “Deep Importance Sampling.” arXiv:2007.02692 [q-fin], July.
Walder, Christian J., Paul Roussel, Richard Nock, Cheng Soon Ong, and Masashi Sugiyama. 2019. “New Tricks for Estimating Gradients of Expectations.” arXiv:1901.11311 [cs, stat], June.
Xia, Yuan. 2011. “Multilevel Monte Carlo Method for Jump-Diffusion SDEs.” arXiv:1106.4730 [q-fin], June.
