Bias reduction

Estimating the bias of an estimator so as to subtract it off again

February 26, 2020

estimator distribution
nonparametric
probabilistic algorithms
statistics

Trying to reduce bias in point estimators, e.g. via the bootstrap. In AIC-style model selection we merely compensate for a known bias when comparing models; in bias reduction proper we try to eliminate the bias from the estimates themselves.
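
To make the bootstrap flavour of this concrete, here is a minimal sketch (in the spirit of Hesterberg 2011), assuming only numpy; the exponential-rate toy problem and all variable names are my own choice of illustration, not taken from any of the papers below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (my illustration): the MLE of an exponential rate,
# 1 / mean(x), is biased upward in small samples.
true_rate = 2.0
x = rng.exponential(scale=1 / true_rate, size=30)

def rate_mle(sample):
    return 1.0 / sample.mean()

est = rate_mle(x)

# Bootstrap estimate of the bias: treat the sample as the population,
# resample, and see how far the re-estimates drift from `est` on average.
n_boot = 4000
boot = np.array([
    rate_mle(rng.choice(x, size=x.size, replace=True))
    for _ in range(n_boot)
])
bias_estimate = boot.mean() - est

# Bias-corrected estimator: subtract the estimated bias off again.
print(f"raw MLE:        {est:.3f}")
print(f"estimated bias: {bias_estimate:+.3f}")
print(f"corrected:      {est - bias_estimate:.3f}")
```

Iterating this plug-in correction leads to the double bootstrap, which is the expensive procedure that Chang and Hall (2015) show how to do cheaply.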

This looks interesting: Kosmidis and Lunardon (2020)

The current work develops a novel method for the reduction of the asymptotic bias of M-estimators from general, unbiased estimating functions. We call the new estimation method reduced-bias M-estimation, or RBM-estimation in short. Like the adjusted scores approach in Firth (1993), the new method relies on additive adjustments to the unbiased estimating functions that are bounded in probability, and results in estimators with bias of lower asymptotic order than the original M-estimators. The key difference is that the empirical adjustments introduced here depend only on the first two derivatives of the contributions to the estimating functions, and they require neither the computation of cumbersome expectations nor the potentially expensive calculation of M-estimates from simulated samples. Specifically, … RBM-estimation

  1. applies to models that are at least partially-specified;
  2. uses an analytical approximation to the bias function that relies only on derivatives of the contributions to the estimating functions;
  3. does not depend on the original estimator; and
  4. does not require the computation of any expectations.
Figure 1: Kosmidis’ comparison table
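
For contrast with the simulation route, here is a toy instance of the analytic route, loosely in the spirit of Firth (1993) but far simpler than the RBM construction above; the example is again my own. For the exponential rate, the MLE λ̂ = 1/x̄ satisfies E[λ̂] = nλ/(n − 1), so its first-order bias is roughly λ/n, and the multiplicative correction λ̂(1 − 1/n) happens to remove the bias exactly in this special case. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)

# Known-form correction (my illustration): E[1 / mean(x)] = n * rate / (n - 1)
# for exponential data, so scaling the MLE by (1 - 1/n) debiases it exactly.
true_rate, n, reps = 2.0, 20, 50_000
samples = rng.exponential(scale=1 / true_rate, size=(reps, n))
mle = 1.0 / samples.mean(axis=1)
corrected = mle * (1 - 1 / n)

print(f"mean raw MLE:   {mle.mean():.4f}")       # ~ 2.0 * 20/19 = 2.105
print(f"mean corrected: {corrected.mean():.4f}")  # ~ 2.0
```

As I read the abstract, the point of RBM-estimation is to automate this kind of correction for general M-estimators, estimating the bias function empirically from derivatives of the estimating-function contributions rather than deriving it case by case.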

References

Cavanaugh. 1997. “Unifying the Derivations for the Akaike and Corrected Akaike Information Criteria.” Statistics & Probability Letters.
Chang, and Hall. 2015. “Double-Bootstrap Methods That Use a Single Double-Bootstrap Simulation.” Biometrika.
Firth. 1993. “Bias Reduction of Maximum Likelihood Estimates.” Biometrika.
Hall. 1994. “Methodology and Theory for the Bootstrap.” In Handbook of Econometrics.
Hall, Horowitz, and Jing. 1995. “On Blocking Rules for the Bootstrap with Dependent Data.” Biometrika.
Hesterberg. 2011. “Bootstrap.” Wiley Interdisciplinary Reviews: Computational Statistics.
Konishi, and Kitagawa. 2003. “Asymptotic Theory for Information Criteria in Model Selection—Functional Approach.” Journal of Statistical Planning and Inference, C.R. Rao 80th Birthday Felicitation Volume, Part IV.
Kosmidis, and Lunardon. 2020. “Empirical Bias-Reducing Adjustments to Estimating Functions.” arXiv:2001.03786 [Math, Stat].
Politis, and White. 2004. “Automatic Block-Length Selection for the Dependent Bootstrap.” Econometric Reviews.
Shibata. 1997. “Bootstrap Estimate of Kullback-Leibler Information for Model Selection.” Statistica Sinica.
Stein. 1981. “Estimation of the Mean of a Multivariate Normal Distribution.” The Annals of Statistics.
Varin, Reid, and Firth. 2011. “An Overview of Composite Likelihood Methods.” Statistica Sinica.
Ye. 1998. “On Measuring and Correcting the Effects of Data Mining and Model Selection.” Journal of the American Statistical Association.
Zou, Hastie, and Tibshirani. 2007. “On the ‘Degrees of Freedom’ of the Lasso.” The Annals of Statistics.