Resampling your own data to estimate how good your point estimator is, and sometimes to reduce its bias. In the i.i.d. setting it’s an intuitive technique; it gets tricky for, e.g., dependent data. For a handy crib sheet of bootstrap failure modes, see Thomas Lumley, When the bootstrap doesn’t work.
In the classical mode, this is a frequentist technique without an immediate Bayesian interpretation.
Commonly credited to B. Efron (1979) and theoretically justified by Giné and Zinn (1990).
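To fix ideas, here is a minimal sketch of the vanilla nonparametric bootstrap in NumPy; the data, statistic, and replicate count are arbitrary toy choices for illustration. We estimate the standard error of the sample median by resampling with replacement.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)  # toy sample

def bootstrap_se(data, stat, n_boot=2000, rng=rng):
    """Estimate the standard error of `stat` by resampling with replacement."""
    n = len(data)
    reps = np.array([
        stat(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)
    ])
    return reps.std(ddof=1)

se_median = bootstrap_se(x, np.median)
```

The empirical distribution of the replicates stands in for the sampling distribution of the statistic; the standard error is just its standard deviation.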
Bootstrap bias correction
As opposed to variance estimation. No big deal in principle: the bootstrap is notionally handing you the whole sampling distribution, so a bias estimate falls out of it for free. 🏗
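A minimal sketch of the standard bias-correction recipe, using the plug-in variance (whose downward bias we know analytically) as a toy target; the sample and replicate count are made-up assumptions, not a recipe from the references:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)

# Plug-in variance (ddof=0) is biased downward; the bootstrap can estimate that bias.
theta_hat = x.var(ddof=0)

n, n_boot = len(x), 2000
boot_reps = np.array([
    x[rng.integers(0, n, size=n)].var(ddof=0) for _ in range(n_boot)
])

bias_hat = boot_reps.mean() - theta_hat   # estimated bias of the plug-in estimator
theta_corrected = theta_hat - bias_hat    # i.e. 2 * theta_hat - boot_reps.mean()
```

The corrected estimate should land nearer the usual `ddof=1` estimator, up to Monte Carlo noise.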
Bootstrap for dependent data
E.g., as presaged, time series. Naive i.i.d. resampling destroys the dependence structure, so we need something like block resampling (Bühlmann 2002; Politis and Romano 1994) or a model-based resample. Parametric bootstrap would be the logical default choice, right? When does that work?
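For flavour, a toy moving-block bootstrap (one of the block schemes surveyed in Bühlmann (2002)): resample contiguous blocks so that short-range dependence survives inside each block. Series, block length, and sizes are all arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy AR(1) series; i.i.d. resampling would destroy its dependence structure.
n, phi = 300, 0.7
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def moving_block_bootstrap(series, block_len, rng=rng):
    """One moving-block resample: paste together random contiguous blocks."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([series[s:s + block_len] for s in starts])[:n]

reps = np.array([
    moving_block_bootstrap(x, block_len=20).mean() for _ in range(1000)
])
se_mean = reps.std(ddof=1)
```

Choosing the block length is the awkward part; see Bühlmann and Künsch (1999).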
As a Bayesian method
There is absolutely a Bayesian bootstrap if you think hard enough about it, it turns out. Several, really. Rubin (1981) derived a Bayesian version. See Lyddon, Holmes, and Walker (2019) for a modern update, and Rasmus Bååth for a diagrammed explanation of the points of contact with the frequentist bootstrap and some other things.
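Rubin’s (1981) scheme is mechanically tiny: instead of resampling observations with replacement, draw observation weights from a flat Dirichlet and reweight the statistic. A toy sketch (data and draw count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=200)

# Rubin's Bayesian bootstrap: each posterior draw is a Dirichlet(1, ..., 1)
# weight vector over the observations, applied to the statistic of interest.
n, n_draws = len(x), 2000
weights = rng.dirichlet(np.ones(n), size=n_draws)  # shape (n_draws, n)
posterior_means = weights @ x                      # one weighted mean per draw
```

The smooth Dirichlet weights replace the multinomial counts of the frequentist bootstrap, which is exactly the point of contact Bååth diagrams.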
References
Barber, Candès, Ramdas, et al. 2021.
“Predictive Inference with the Jackknife+.” The Annals of Statistics.
Bühlmann. 2002.
“Bootstraps for Time Series.” Statistical Science.
Bühlmann, and Künsch. 1999.
“Block Length Selection in the Bootstrap for Time Series.” Computational Statistics & Data Analysis.
Burnham, and Anderson. 2004.
“Multimodel Inference: Understanding AIC and BIC in Model Selection.” Sociological Methods & Research.
Chen, and Lo. 1997.
“On a Mapping Approach to Investigating the Bootstrap Accuracy.” Probability Theory and Related Fields.
Dahlhaus. 2011.
“Discussion: Bootstrap Methods for Dependent Data: A Review.” Journal of the Korean Statistical Society.
DiCiccio, and Efron. 1996a.
“Bootstrap Confidence Intervals: Rejoinder.” Statistical Science.
———. 1996b.
“Bootstrap Confidence Intervals.” Statistical Science.
———. 2012.
“Bayesian Inference and the Parametric Bootstrap.” The Annals of Applied Statistics.
Efron. 1979.
“Bootstrap Methods: Another Look at the Jackknife.” The Annals of Statistics.
Giné, and Zinn. 1990.
“Bootstrapping General Empirical Measures.” Annals of Probability.
Giordano, Jordan, and Broderick. 2019.
“A Higher-Order Swiss Army Infinitesimal Jackknife.” arXiv:1907.12116 [Cs, Math, Stat].
Gonçalves, and Politis. 2011.
“Discussion: Bootstrap Methods for Dependent Data: A Review.” Journal of the Korean Statistical Society.
Green, and Shalizi. 2017.
“Bootstrapping Exchangeable Random Graphs.” arXiv:1711.00813 [Stat].
Hall. 1994.
“Methodology and Theory for the Bootstrap.” In Handbook of Econometrics.
Härdle, Horowitz, and Kreiss. 2003.
“Bootstrap Methods for Time Series.” International Statistical Review.
Hesterberg. 2011.
“Bootstrap.” Wiley Interdisciplinary Reviews: Computational Statistics.
Imbens, and Menzel. 2021.
“A Causal Bootstrap.” The Annals of Statistics.
Lahiri. 2003. Resampling Methods for Dependent Data.
Lee, and Young. 1996.
“Bootstrap Confidence Intervals: Comment.” Statistical Science.
Lyddon, Holmes, and Walker. 2019.
“General Bayesian Updating and the Loss-Likelihood Bootstrap.” Biometrika.
Papadopoulos, Edwards, and Murray. 2001.
“Confidence Estimation Methods for Neural Networks: A Practical Comparison.” IEEE Transactions on Neural Networks.
Paparoditis, and Sapatinas. 2014.
“Bootstrap-Based Testing for Functional Data.” arXiv:1409.4317 [Math, Stat].
Politis, and Romano. 1994.
“The Stationary Bootstrap.” Journal of the American Statistical Association.
Rodriguez, and Ruiz. 2009.
“Bootstrap Prediction Intervals in State–Space Models.” Journal of Time Series Analysis.
Rubin. 1981.
“The Bayesian Bootstrap.” Annals of Statistics.
Shalizi. 2010.
“The Bootstrap.” American Scientist.
Shao. 1996.
“Bootstrap Model Selection.” Journal of the American Statistical Association.
Shibata. 1997.
“Bootstrap Estimate of Kullback-Leibler Information for Model Selection.” Statistica Sinica.
Yatchew, and Härdle. 2006.
“Nonparametric State Price Density Estimation Using Constrained Least Squares and the Bootstrap.” Journal of Econometrics.