Bias reduction

Estimating the bias of an estimator so as to subtract it off again

Here we try to reduce bias in point estimators, e.g. via the bootstrap. Contrast this with, say, AIC, where we merely compensate for the bias that model selection induces; in bias reduction we try to eliminate the bias from the estimates themselves.
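The bootstrap version of the idea is simple enough to sketch in a few lines: estimate the bias of an estimator by the average shift it exhibits over resamples, then subtract that estimate off. A minimal illustration, using the downward-biased plug-in (MLE) variance as the target estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=50)

def var_mle(sample):
    # Plug-in variance estimator (divides by n): biased downward by a factor (n-1)/n.
    return np.mean((sample - sample.mean()) ** 2)

theta_hat = var_mle(x)

# Bootstrap estimate of the bias: resample with replacement, re-estimate,
# and compare the bootstrap average to the original estimate.
B = 2000
boot = np.array([
    var_mle(rng.choice(x, size=x.size, replace=True))
    for _ in range(B)
])
bias_hat = boot.mean() - theta_hat

# Bias-corrected estimate: subtract the estimated bias.
theta_corrected = theta_hat - bias_hat
```

Since the plug-in variance is biased downward, `bias_hat` comes out negative and the corrected estimate is pulled upward, toward the familiar $n/(n-1)$ correction.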

This looks interesting: Kosmidis and Lunardon (2020)

The current work develops a novel method for the reduction of the asymptotic bias of M-estimators from general, unbiased estimating functions. We call the new estimation method reduced-bias M-estimation, or RBM-estimation in short. Like the adjusted scores approach in Firth (1993), the new method relies on additive adjustments to the unbiased estimating functions that are bounded in probability, and results in estimators with bias of lower asymptotic order than the original M-estimators. The key difference is that the empirical adjustments introduced here depend only on the first two derivatives of the contributions to the estimating functions, and they require neither the computation of cumbersome expectations nor the potentially expensive calculation of M-estimates from simulated samples. Specifically, …, RBM-estimation

  1. applies to models that are at least partially-specified;
  2. uses an analytical approximation to the bias function that relies only on derivatives of the contributions to the estimating functions;
  3. does not depend on the original estimator; and
  4. does not require the computation of any expectations.
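The common recipe behind these approaches (a standard first-order asymptotic argument, not specific to RBM-estimation) is: if the estimator's bias admits an expansion $b(\theta)/n + O(n^{-2})$, then subtracting any consistent estimate of the leading term — whether obtained analytically, as in Firth (1993) and Kosmidis and Lunardon (2020), or by simulation, as in the bootstrap — knocks the bias down an order:

```latex
\hat\theta_{\mathrm{corr}} = \hat\theta - \frac{\hat b(\hat\theta)}{n},
\qquad
\mathbb{E}\bigl[\hat\theta_{\mathrm{corr}}\bigr] - \theta = O(n^{-2}).
```

The methods differ in how $\hat b$ is computed and in how much of the model must be specified to compute it.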

Kosmidis’ comparison table

Cavanaugh, Joseph E. 1997. “Unifying the Derivations for the Akaike and Corrected Akaike Information Criteria.” Statistics & Probability Letters 33 (2): 201–8.

Chang, Jinyuan, and Peter Hall. 2015. “Double-Bootstrap Methods That Use a Single Double-Bootstrap Simulation.” Biometrika 102 (1): 203–14.

Firth, David. 1993. “Bias Reduction of Maximum Likelihood Estimates.” Biometrika 80 (1): 27–38.

Hall, Peter. 1994. “Methodology and Theory for the Bootstrap.” In Handbook of Econometrics, 4:2341–81. Elsevier.

Hall, Peter, Joel L. Horowitz, and Bing-Yi Jing. 1995. “On Blocking Rules for the Bootstrap with Dependent Data.” Biometrika 82 (3): 561–74.

Hesterberg, Tim. 2011. “Bootstrap.” Wiley Interdisciplinary Reviews: Computational Statistics 3 (6): 497–526.

Konishi, Sadanori, and Genshiro Kitagawa. 2003. “Asymptotic Theory for Information Criteria in Model Selection—Functional Approach.” Journal of Statistical Planning and Inference, C.R. Rao 80th Birthday Felicitation vol., Part IV, 114 (1–2): 45–61.

Kosmidis, Ioannis, and Nicola Lunardon. 2020. “Empirical Bias-Reducing Adjustments to Estimating Functions.” arXiv preprint, January.

Politis, Dimitris N., and Halbert White. 2004. “Automatic Block-Length Selection for the Dependent Bootstrap.” Econometric Reviews 23 (1): 53–70.

Shibata, Ritei. 1997. “Bootstrap Estimate of Kullback-Leibler Information for Model Selection.” Statistica Sinica 7: 375–94.

Stein, Charles M. 1981. “Estimation of the Mean of a Multivariate Normal Distribution.” The Annals of Statistics 9 (6): 1135–51.

Varin, Cristiano, Nancy Reid, and David Firth. 2011. “An Overview of Composite Likelihood Methods.” Statistica Sinica 21 (1): 5–42.

Ye, Jianming. 1998. “On Measuring and Correcting the Effects of Data Mining and Model Selection.” Journal of the American Statistical Association 93 (441): 120–31.

Zou, Hui, Trevor Hastie, and Robert Tibshirani. 2007. “On the ‘Degrees of Freedom’ of the Lasso.” The Annals of Statistics 35 (5): 2173–92.