Model complexity penalties

Information criteria, degrees of freedom etc

Understanding the “degrees of freedom” of a model, and estimating the matrix trace that appears in the complexity penalty.

🏗 Explain AIC, \(C_p\), SURE, and BIC-type degrees of freedom, and whatever variants there are out there.

As seen in robust estimation and in AIC/BIC, complexity penalties crop up in model selection, i.e. in choosing a model of complexity appropriate to your data.

Efron (Efro04) is an excellent introduction, compressing 30 years of theory into 2 pages. Massart (Mass00) seems more modern in flavour to me:

The reader who is not familiar with model selection via [complexity] penalization can legitimately ask the question: where does the idea of penalization come from? It is possible to answer this question at two different levels:

  • at some intuitive level by presenting the heuristics of one of the first criterion of this kind which has been introduced by Akaike (1973);

  • at some technical level by explaining why such a strategy of model selection has some chances to succeed.

Yuan and Lin (YuLi06) are an example of the kind of argument I need in order to use a linear-model approximation for general application of degrees of freedom in sparse model selection.

Zou et al (ZoHT07):

Degrees of freedom is a familiar phrase for many statisticians. In linear regression the degrees of freedom is the number of estimated predictors. Degrees of freedom is often used to quantify the model complexity of a statistical modeling procedure (Hastie and Tibshirani HaTi90). However, generally speaking, there is no exact correspondence between the degrees of freedom and the number of parameters in the model (Ye, Ye98). Stein’s unbiased risk estimation (SURE) theory (Stei81) gives a rigorous definition of the degrees of freedom for any fitting procedure. Efron (Efro04) showed that \(C_p\) is an unbiased estimator of the true prediction error, and in some settings it offers substantially better accuracy than cross-validation and related nonparametric methods. Thus degrees of freedom plays an important role in model assessment and selection. Donoho and Johnstone (DoJo95) used the SURE theory to derive the degrees of freedom of soft thresholding and showed that it leads to an adaptive wavelet shrinkage procedure called SureShrink. Ye (Ye98) and Shen and Ye (ShYe02) showed that the degrees of freedom can capture the inherent uncertainty in modeling and frequentist model selection. Shen and Ye (ShYe02) and Shen, Huang and Ye (ShHY04) further proved that the degrees of freedom provides an adaptive model selection criterion that performs better than the fixed-penalty model selection criteria.
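For reference, the SURE/covariance definition of degrees of freedom they invoke, in my notation and assuming a homoskedastic Gaussian observation model, is

\[
y_i = \mu_i + \varepsilon_i,\quad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
\qquad
\operatorname{df}(\hat{\mu}) := \frac{1}{\sigma^2}\sum_{i=1}^{n} \operatorname{Cov}(\hat{\mu}_i, y_i),
\]

with the corresponding \(C_p\)-type estimate of prediction error

\[
\widehat{\operatorname{Err}} = \frac{1}{n}\lVert y - \hat{\mu}\rVert^2 + \frac{2\sigma^2}{n}\operatorname{df}(\hat{\mu}).
\]

For fits that are almost-differentiable functions of \(y\), Stein’s lemma (Stei81) turns the unobservable covariance into an observable divergence, \(\operatorname{df}(\hat{\mu}) = \mathbb{E}\bigl[\sum_i \partial \hat{\mu}_i/\partial y_i\bigr]\); for a linear smoother \(\hat{\mu} = Hy\) this is just \(\operatorname{tr}(H)\).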

Information Criteria

Akaike and friends. With M-estimation (e.g. maximum likelihood estimation and robust estimation), these are marvellous and general shortcuts for model selection (i.e. choosing a model of complexity appropriate to your data) without resorting to computationally expensive cross-validation.

For all of these, a thing called the number of effective degrees of freedom is important. There are several different definitions for that, and they only sometimes coincide, so I leave that for a different notebook.
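Deferring those subtleties, the simplest concrete case is worth keeping on hand: for a linear smoother the effective degrees of freedom is just the trace of the hat matrix. A minimal sketch in plain numpy, on synthetic data, for ridge regression (`ridge_df` is my own illustrative helper, not any library’s API):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))

def ridge_df(X, lam):
    """Effective degrees of freedom of ridge regression:
    df(lam) = tr(X (X'X + lam I)^{-1} X') = sum_j d_j^2 / (d_j^2 + lam),
    where d_j are the singular values of X."""
    d = np.linalg.svd(X, compute_uv=False)
    return np.sum(d ** 2 / (d ** 2 + lam))

# Shrinks smoothly from p (ordinary least squares) towards 0 as the penalty grows.
for lam in [0.0, 1.0, 10.0, 100.0]:
    print(f"lambda = {lam:6.1f}  df = {ridge_df(X, lam):5.2f}")
```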

Information criteria can then, in effect, do the same job as cross-validation at a small fraction of the computational cost. In fact, the two are asymptotically equivalent; see below.

Estimated cross entropy (KL divergence between the model and… what?) seems to be the flavour of the minute in machine learning (because cross-validation is unbearably slow?). Here is Christopher Olah’s excellent visual explanation of it. These do relate, no? Actually I haven’t read this.

To learn:

  • How this interacts with robust estimators

  • How to use AIC with nonparametric or high dimensional methods (GIC)

  • How it relates to minimum description length (e.g. BHLL08)

Influential current English-language texts in this area are BuAn02, ClHj08 and KoKi08. The first of these is highly cited and brought the AIC method into the Western mainstream from the specialised fringes where it had previously sat. The latter two focus on extensions such as TIC and GIC.

🏗 general description.

🏗 clarify relationship to Minimum Description Length, Rissanen-style.

ZhLT10:

In the literature, selection criteria are usually classified into two categories: consistent (e.g., the Bayesian information criterion BIC, Schwarz, 1978) and efficient (e.g., the Akaike information criterion AIC, Akaike, 1974; the generalized cross-validation GCV, Craven and Wahba, 1979). A consistent criterion identifies the true model with a probability that approaches 1 in large samples when a set of candidate models contains the true model. An efficient criterion selects the model so that its average squared error is asymptotically equivalent to the minimum offered by the candidate models when the true model is approximated by a family of candidate models. Detailed discussions on efficiency and consistency can be found in Shibata (1981, 1984), Li (1987), Shao (1997) and McQuarrie and Tsai (1998).

Non-KL degrees of freedom

… is what I actually am interested in atm.

Efficient: AIC etc

Akaike Information Criterion (AIC)

The classic.
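For concreteness, with \(k\) estimated parameters, \(n\) observations and maximised likelihood \(L(\hat{\theta})\), one selects the candidate model minimising

\[
\operatorname{AIC} = -2\log L(\hat{\theta}) + 2k.
\]

The small-sample correction due to Sugiura and to Hurvich and Tsai (both in the references below) is

\[
\operatorname{AIC}_c = -2\log L(\hat{\theta}) + 2k + \frac{2k(k+1)}{n-k-1}.
\]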

Takeuchi Information Criterion (TIC)

Apparently this one was influential in Japan but went untranslated into English, so it became common in the West only belatedly. Good explanations are in ClHj08 and KoKi08. It relaxes the assumption that the model is Fisher efficient (i.e. that the true generating process is included in your model class, and that with enough data you would discover it).
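In (more or less) the notation of ClHj08 and KoKi08, TIC replaces the parameter count with the trace of an estimated sandwich:

\[
\operatorname{TIC} = -2\sum_{i=1}^{n}\log f(x_i \mid \hat{\theta}) + 2\operatorname{tr}\bigl(\hat{J}^{-1}\hat{K}\bigr),
\qquad
\hat{J} = -\frac{1}{n}\sum_{i=1}^{n}\frac{\partial^2 \log f(x_i \mid \hat{\theta})}{\partial\theta\,\partial\theta^{\top}},
\quad
\hat{K} = \frac{1}{n}\sum_{i=1}^{n}\frac{\partial \log f(x_i \mid \hat{\theta})}{\partial\theta}\frac{\partial \log f(x_i \mid \hat{\theta})}{\partial\theta^{\top}}.
\]

When the model is correctly specified, \(J = K\) (the information matrix equality), the trace reduces to \(k\) and we recover AIC.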

Konishi and Kitagawa’s Generalised Information Criterion (GIC)

Takes information criteria to general (e.g. robust, penalised) M-estimation instead of pure ML estimation, and also relaxes the assumption that we even have the “true” model in our class (KoKi96; compare BuNo95, and probably others). In particular, you are no longer obliged to fit the model by, say, minimising least-squares error. ClHj08 mention the “Robustified Information Criterion” in passing, which may relate?
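As I read KoKi96, for an M-estimator defined by estimating equations \(\sum_i \psi(x_i, \hat{\theta}) = 0\) the criterion keeps the same shape as TIC but swaps the score for \(\psi\) in one slot (I am hedging on the exact regularity conditions and notation):

\[
\operatorname{GIC} = -2\sum_{i=1}^{n}\log f(x_i \mid \hat{\theta}) + 2\operatorname{tr}\bigl(\hat{R}^{-1}\hat{Q}\bigr),
\qquad
\hat{R} = -\frac{1}{n}\sum_{i=1}^{n}\frac{\partial \psi(x_i, \hat{\theta})}{\partial\theta^{\top}},
\quad
\hat{Q} = \frac{1}{n}\sum_{i=1}^{n}\psi(x_i, \hat{\theta})\,\frac{\partial \log f(x_i \mid \hat{\theta})}{\partial\theta^{\top}}.
\]

Taking \(\psi\) to be the score recovers TIC.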

🏗 Explain my laborious reasoning that generalised Akaike information criteria for penalised regression don’t seem to work when the penalty term is not differentiable (cross-validation works fine, though, and possibly also BIC), and the issues that therefore arise in model selection for such models in the sparse case.

Focussed information criterion (FIC)

Claeskens and Hjort define this (ClHj08, chapter 6):

The model selection methods presented earlier (such as AIC and the BIC) have one thing in common: they select one single ‘best model’, which should then be used to explain all aspects of the mechanisms underlying the data and predict all future data points. The tolerance discussion in chapter 5 showed that sometimes one model is best for estimating one type of estimand, whereas another model is best for another estimand. The point of view expressed via the [FIC] is that a ‘best model’ should depend on the parameter under focus, such as the mean, or the variance, or the particular covariate values etc. Thus the FIC allows and encourages different models to be selected for different parameters of interest.

This sounds very logical; of course, then one must do more work to make it go.

Network information criterion

MuYA94: “an estimator of the expected loss of a loss function \(\ell(\theta)+\lambda H(\theta)\) where \(H(\theta)\) is a regularisation term”.

Regularization information criterion

Shib89 - is this distinct from GIC?

Bootstrap information criterion

A compromise between the computational cheapness of information criteria and the practical simplicity of cross-validation.

KoKi08 ch 8. See ClHj08 6.3 for a bootstrap-FIC.
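A minimal sketch of the simplest bootstrap bias correction of the kind KoKi08 ch 8 describe (often labelled EIC), here for a toy Gaussian model of my own choosing; the estimated bias should come out close to the AIC penalty of \(k = 2\):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200) * 2.0 + 1.0  # data from N(1, 4)

def loglik(theta, x):
    """Gaussian log-likelihood with theta = (mu, sigma2)."""
    mu, s2 = theta
    return -0.5 * np.sum(np.log(2 * np.pi * s2) + (x - mu) ** 2 / s2)

def mle(x):
    return np.array([x.mean(), x.var()])  # ML estimates of (mu, sigma2)

# Bootstrap estimate of the optimism term: average over bootstrap samples of
# [log-likelihood of the refit on its own sample minus on the original sample].
B = 500
bias = 0.0
for _ in range(B):
    xb = rng.choice(x, size=x.size, replace=True)
    tb = mle(xb)
    bias += loglik(tb, xb) - loglik(tb, x)
bias /= B

theta_hat = mle(x)
eic = -2 * loglik(theta_hat, x) + 2 * bias
aic = -2 * loglik(theta_hat, x) + 2 * 2  # 2 estimated parameters

print(f"bootstrap bias estimate: {bias:.2f} (AIC charges k = 2)")
print(f"EIC = {eic:.1f}, AIC = {aic:.1f}")
```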

Consistency of model order selected - AIC

Akaike-type information criteria are not asymptotically consistent (see KoKi08), in the sense that if there is a true model, it is not selected with probability approaching 1 in the large-sample limit. However, the distribution of selected model orders does not get worse as n increases. BuAn02 6.3 and KoKi08 3.5.2 discuss this. In a sense it would be surprising if AIC did do especially well at selecting model order, since the criterion is designed to minimise prediction error, not model-selection error; model order is more or less a nuisance parameter in this framework.

TBC.

Cross-validation equivalence

KoKi08, 10.1.4 discuss the asymptotic equivalence of AIC/TIC/GIC and cross validation under various circumstances, attributing the equivalence results to Ston77 and Shib89. ClHj08 proves a similar result.

Automatic GIC

🏗 I know that KoKi96 give formulae for loss functions for essentially any M-estimation and penalisation procedure, but in general the degrees-of-freedom trace calculation is nasty and only in principle estimable from the data, requiring a matrix product involving the Hessian at every data point. This is not necessarily computationally tractable; I know of closed-form formulae only for GLMs and robust regression with \(\ell_2\) penalties. Can we get such penalties for more general ML fits? A sketch of what the empirical trace computation involves, in the simplest case, follows.
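To make the complaint concrete, here is a minimal sketch of the empirical trace in the one easy case, a plain logistic regression, where the per-observation scores and Hessians are analytic (the data and helper names are my own illustration); anything fancier needs those same per-datum quantities from autodiff or worse.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([0.5, 1.0, -1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def fit_logistic(X, y, iters=50):
    """Newton-Raphson for the logistic regression MLE."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ ((mu * (1 - mu))[:, None] * X)  # negative Hessian of log-lik
        g = X.T @ (y - mu)                        # score
        beta = beta + np.linalg.solve(H, g)
    return beta

beta_hat = fit_logistic(X, y)
mu = 1 / (1 + np.exp(-X @ beta_hat))

scores = (y - mu)[:, None] * X                    # per-observation score contributions
J = X.T @ ((mu * (1 - mu))[:, None] * X) / n      # average negative Hessian
K = scores.T @ scores / n                         # average outer product of scores

# TIC/GIC-style trace penalty; close to p when the model is correctly specified.
tic_penalty = np.trace(np.linalg.solve(J, K))
print(f"trace penalty: {tic_penalty:.2f}  (AIC would charge k = {X.shape[1]})")
```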

GIC & the LASSO

I thought this didn’t work because we needed the second derivative of the penalty; but see ZhLT10.

Information criteria at scale

Big-data information criteria. AIC is already computationally cheaper than cross-validation. What about when my data is so large that I would like to select my model before looking at all of it, with such-and-such a guarantee of goodness? Can I do AIC at scale? If I am fitting a model using SGD, can I estimate my model order using partial data? How? I’m interested in doing this in a way that preserves the property of being computationally cheaper than cross-validating.

Here’s an example… Bondell et al (BoKG10):

In order to avoid complete enumeration of all possible \(2^{p+q}\) models, Wolfinger (1993) and Diggle, Liang and Zeger (1994) recommended the Restricted Information Criterion (denoted by REML.IC), in that, by using the most complex mean structure, selection is first performed on the variance-covariance structure by computing the AIC and/or BIC. Given the best covariance structure, selection is then performed on the fixed effects. Alternatively, Pu and Niu (2006) proposed the EGIC (Extended GIC), where using the BIC, selection is first performed on the fixed effects by including all of the random effects into the model. Once the fixed effect structure is chosen, selection is then performed on the random effects.

In general I’d like to avoid enumerating the models as much as possible and simply select relevant predictors with high probability, compressive-sensing style.

Consistent: Bayesian Information Criteria

a.k.a. Schwarz Information Criterion. Also co-invented by the unstoppable Akaike. (Schw78, Akai78)

This is a different family from the original AIC. It has a justification in terms of MDL and of Bayes risk? Different regularity conditions, something something…
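For concreteness, the criterion itself, again with \(k\) parameters and \(n\) observations:

\[
\operatorname{BIC} = -2\log L(\hat{\theta}) + k\log n,
\]

which arises as a Laplace approximation to \(-2\) times the log marginal likelihood of the model, discarding \(O(1)\) terms; the heavier penalty relative to AIC’s \(2k\) is what buys consistency at the price of efficiency.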

How would this work with regularisation? Apparently Mach93 extends BIC to robust fitting, much as the GIC does for the AIC. ClHj08 give an accessible summary and more general settings.

Consistent and/or efficient: Nishii’s Generalised Information Criterion

Nish84, commended by ZhLT10 as a unifying formalism which includes both efficient- and consistent-type information penalties as special cases. I don’t know much about this.

Akaike, Hirotugu. 1973. “Information Theory and an Extension of the Maximum Likelihood Principle.” In Proceedings of the Second International Symposium on Information Theory, edited by B. N. Petrov and F. Csáki, 199–213. Budapest: Akademiai Kiado. http://link.springer.com/chapter/10.1007/978-1-4612-1694-0_15.

Akaike, Hirotugu. 1981. “Likelihood of a Model and Information Criteria.” Journal of Econometrics 16 (1): 3–14. https://doi.org/10.1016/0304-4076(81)90071-3.

———. 1978. “A New Look at the Bayes Procedure.” Biometrika 65 (1): 53–59. https://doi.org/10.1093/biomet/65.1.53.

Akaike, Hirotugu. 1973. “Maximum Likelihood Identification of Gaussian Autoregressive Moving Average Models.” Biometrika 60 (2): 255–65. https://doi.org/10.1093/biomet/60.2.255.

Ando, Tomohiro, Sadanori Konishi, and Seiya Imoto. 2008. “Nonlinear Regression Modeling via Regularized Radial Basis Function Networks.” Journal of Statistical Planning and Inference, Special Issue in Honor of Junjiro Ogawa (1915 - 2000): Design of Experiments, Multivariate Analysis and Statistical Inference, 138 (11): 3616–33. https://doi.org/10.1016/j.jspi.2005.07.014.

Barron, Andrew R. 1986. “Entropy and the Central Limit Theorem.” The Annals of Probability 14 (1): 336–42. https://doi.org/10.1214/aop/1176992632.

Barron, Andrew R., Cong Huang, Jonathan Q. Li, and Xi Luo. 2008. “MDL, Penalized Likelihood, and Statistical Risk.” In Information Theory Workshop, 2008. ITW’08. IEEE, 247–57. IEEE. https://doi.org/10.1109/ITW.2008.4578660.

Barron, A., J. Rissanen, and Bin Yu. 1998. “The Minimum Description Length Principle in Coding and Modeling.” IEEE Transactions on Information Theory 44 (6): 2743–60. https://doi.org/10.1109/18.720554.

Bashtannyk, David M., and Rob J. Hyndman. 2001. “Bandwidth Selection for Kernel Conditional Density Estimation.” Computational Statistics & Data Analysis 36 (3): 279–98. https://doi.org/10.1016/S0167-9473(00)00046-3.

Bickel, Peter J., Bo Li, Alexandre B. Tsybakov, Sara A. van de Geer, Bin Yu, Teófilo Valdés, Carlos Rivero, Jianqing Fan, and Aad van der Vaart. 2006. “Regularization in Statistics.” Test 15 (2): 271–344. https://doi.org/10.1007/BF02607055.

Birgé, Lucien, and Pascal Massart. 2006. “Minimal Penalties for Gaussian Model Selection.” Probability Theory and Related Fields 138 (1-2): 33–73. https://doi.org/10.1007/s00440-006-0011-8.

Bondell, Howard D., Arun Krishna, and Sujit K. Ghosh. 2010. “Joint Variable Selection for Fixed and Random Effects in Linear Mixed-Effects Models.” Biometrics 66 (4): 1069–77. https://doi.org/10.1111/j.1541-0420.2010.01391.x.

Buckland, S. T., K. P. Burnham, and N. H. Augustin. 1997. “Model Selection: An Integral Part of Inference.” Biometrics 53 (2): 603–18. https://doi.org/10.2307/2533961.

Bunea, Florentina. 2004. “Consistent Covariate Selection and Post Model Selection Inference in Semiparametric Regression.” The Annals of Statistics 32 (3): 898–927. https://doi.org/10.1214/009053604000000247.

Burman, P., and D. Nolan. 1995. “A General Akaike-Type Criterion for Model Selection in Robust Regression.” Biometrika 82 (4): 877–86. https://doi.org/10.1093/biomet/82.4.877.

Burnham, Kenneth P., and David R. Anderson. 2004. “Multimodel Inference Understanding AIC and BIC in Model Selection.” Sociological Methods & Research 33 (2): 261–304. https://doi.org/10.1177/0049124104268644.

Burnham, Kenneth P., and David Raymond Anderson. 2002. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. 2nd ed. New York: Springer.

Cavanaugh, Joseph E. 1997. “Unifying the Derivations for the Akaike and Corrected Akaike Information Criteria.” Statistics & Probability Letters 33 (2): 201–8. https://doi.org/10.1016/S0167-7152(96)00128-9.

Cavanaugh, Joseph E., and Robert H. Shumway. 1998. “An Akaike Information Criterion for Model Selection in the Presence of Incomplete Data.” Journal of Statistical Planning and Inference 67 (1): 45–65. https://doi.org/10.1016/S0378-3758(97)00115-8.

Chen, Jiahua, and Zehua Chen. 2008. “Extended Bayesian Information Criteria for Model Selection with Large Model Spaces.” Biometrika 95 (3): 759–71. https://doi.org/10.1093/biomet/asn034.

Chichignoud, Michaël, Johannes Lederer, and Martin Wainwright. 2014. “A Practical Scheme and Fast Algorithm to Tune the Lasso with Optimality Guarantees,” October. http://arxiv.org/abs/1410.0247.

Claeskens, Gerda, and Nils Lid Hjort. 2008. Model Selection and Model Averaging. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge ; New York: Cambridge University Press.

Claeskens, Gerda, Tatyana Krivobokova, and Jean D. Opsomer. 2009. “Asymptotic Properties of Penalized Spline Estimators.” Biometrika 96 (3): 529–44. https://doi.org/10.1093/biomet/asp035.

Donoho, David L., and Iain M. Johnstone. 1995. “Adapting to Unknown Smoothness via Wavelet Shrinkage.” Journal of the American Statistical Association 90 (432): 1200–1224. https://doi.org/10.1080/01621459.1995.10476626.

Dossal, Charles, Maher Kachour, Jalal M. Fadili, Gabriel Peyré, and Christophe Chesneau. 2011. “The Degrees of Freedom of the Lasso for General Design Matrix,” November. http://arxiv.org/abs/1111.1162.

Efron, Bradley. 1986. “How Biased Is the Apparent Error Rate of a Prediction Rule?” Journal of the American Statistical Association 81 (394): 461–70. https://doi.org/10.1080/01621459.1986.10478291.

———. 2004. “The Estimation of Prediction Error.” Journal of the American Statistical Association 99 (467): 619–32. https://doi.org/10.1198/016214504000000692.

Fan, Jianqing, and Runze Li. 2001. “Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties.” Journal of the American Statistical Association 96 (456): 1348–60. https://doi.org/10.1198/016214501753382273.

Hastie, Trevor J., and Robert J. Tibshirani. 1990. Generalized Additive Models. Vol. 43. CRC Press. https://books.google.com.au/books?hl=en&lr=&id=qa29r1Ze1coC&oi=fnd&pg=PR13&ots=j32OnmAYkL&sig=uIjcDemVVYQpa1hDj4ip8OK4gcE.

Hu, Feifang, and James V. Zidek. 2002. “The Weighted Likelihood.” The Canadian Journal of Statistics / La Revue Canadienne de Statistique 30 (3): 347–71. https://doi.org/10.2307/3316141.

Huang, Cong, G. L. H. Cheang, and Andrew R. Barron. 2008. “Risk of Penalized Least Squares, Greedy Selection and L1 Penalization for Flexible Function Libraries.” http://www.stat.yale.edu/~arb4/publications_files/RiskGreedySelectionAndL1penalization.pdf.

Huang, Jian, Shuange Ma, Huiliang Xie, and Cun-Hui Zhang. 2009. “A Group Bridge Approach for Variable Selection.” Biometrika 96 (2): 339–55. https://doi.org/10.1093/biomet/asp020.

Hurvich, Clifford M., Jeffrey S. Simonoff, and Chih-Ling Tsai. 1998. “Smoothing Parameter Selection in Nonparametric Regression Using an Improved Akaike Information Criterion.” Journal of the Royal Statistical Society. Series B (Statistical Methodology) 60 (2): 271–93. http://www.jstor.org/stable/2985940.

Hurvich, Clifford M., and Chih-Ling Tsai. 1989. “Regression and Time Series Model Selection in Small Samples.” Biometrika 76 (2): 297–307. https://doi.org/10.1093/biomet/76.2.297.

Imoto, Seiya, and Sadanori Konishi. 1999. “Estimation of B-Spline Nonparametric Regression Models Using Information.” http://www.stat.fi/isi99/proceedings/arkisto/varasto/sada0191.pdf.

Janson, Lucas, William Fithian, and Trevor J. Hastie. 2015. “Effective Degrees of Freedom: A Flawed Metaphor.” Biometrika 102 (2): 479–85. https://doi.org/10.1093/biomet/asv019.

Kato, Kengo. 2009. “On the Degrees of Freedom in Shrinkage Estimation.” Journal of Multivariate Analysis 100 (7): 1338–52. https://doi.org/10.1016/j.jmva.2008.12.002.

Kaufman, S., and S. Rosset. 2014. “When Does More Regularization Imply Fewer Degrees of Freedom? Sufficient Conditions and Counterexamples.” Biometrika 101 (4): 771–84. https://doi.org/10.1093/biomet/asu034.

Konishi, Sadanori, and G. Kitagawa. 2008. Information Criteria and Statistical Modeling. Springer Series in Statistics. New York: Springer.

Konishi, Sadanori, and Genshiro Kitagawa. 1996. “Generalised Information Criteria in Model Selection.” Biometrika 83 (4): 875–90. https://doi.org/10.1093/biomet/83.4.875.

———. 2003. “Asymptotic Theory for Information Criteria in Model Selection—Functional Approach.” Journal of Statistical Planning and Inference, C.R. Rao 80th Birthday Felicitation vol., Part IV, 114 (1–2): 45–61. https://doi.org/10.1016/S0378-3758(02)00462-7.

Le, Tri, and Bertrand Clarke. 2017. “A Bayes Interpretation of Stacking for $\mathcal{M}$-Complete and $\mathcal{M}$-Open Settings.” Bayesian Analysis 12 (3): 807–29. https://doi.org/10.1214/16-BA1023.

Leung, G., and A. R. Barron. 2006. “Information Theory and Mixing Least-Squares Regressions.” IEEE Transactions on Information Theory 52 (8): 3396–3410. https://doi.org/10.1109/TIT.2006.878172.

Li, Jonathan Q., and Andrew R. Barron. 2000. “Mixture Density Estimation.” In Advances in Neural Information Processing Systems 12, edited by S. A. Solla, T. K. Leen, and K. Müller, 279–85. MIT Press. http://papers.nips.cc/paper/1673-mixture-density-estimation.pdf.

Li, Ker-Chau. 1987. “Asymptotic Optimality for $C_p, C_L$, Cross-Validation and Generalized Cross-Validation: Discrete Index Set.” The Annals of Statistics 15 (3): 958–75. https://doi.org/10.1214/aos/1176350486.

Li, Runze, and Hua Liang. 2008. “Variable Selection in Semiparametric Regression Modeling.” The Annals of Statistics 36 (1): 261–86. https://doi.org/10.1214/009053607000000604.

Lim, Néhémy, and Johannes Lederer. 2016. “Efficient Feature Selection with Large and High-Dimensional Data,” September. http://arxiv.org/abs/1609.07195.

Machado, José A. F. 1993. “Robust Model Selection and M-Estimation.” Econometric Theory 9 (03): 478–93. https://doi.org/10.1017/S0266466600007775.

Massart, Pascal. 2000. “Some Applications of Concentration Inequalities to Statistics.” In Annales de La Faculté Des Sciences de Toulouse: Mathématiques, 9:245–303. http://archive.numdam.org/article/AFST_2000_6_9_2_245_0.pdf.

———. 2007. Concentration Inequalities and Model Selection: Ecole d’Eté de Probabilités de Saint-Flour XXXIII - 2003. Lecture Notes in Mathematics 1896. Berlin ; New York: Springer-Verlag. http://www.cmap.polytechnique.fr/~merlet/articles/probas_massart_stf03.pdf.

Murata, N., S. Yoshizawa, and S. Amari. 1994. “Network Information Criterion-Determining the Number of Hidden Units for an Artificial Neural Network Model.” IEEE Transactions on Neural Networks 5 (6): 865–72. https://doi.org/10.1109/72.329683.

Nishii, Ryuei. 1984. “Asymptotic Properties of Criteria for Selection of Variables in Multiple Regression.” The Annals of Statistics 12 (2): 758–65. https://doi.org/10.1214/aos/1176346522.

Qian, Guoqi, and Hans R. Künsch. 1998. “On Model Selection via Stochastic Complexity in Robust Linear Regression.” Journal of Statistical Planning and Inference 75 (1): 91–116. https://doi.org/10.1016/S0378-3758(98)00138-4.

Rao, C. R., and Y. Wu. 2001. “On Model Selection.” In Institute of Mathematical Statistics Lecture Notes - Monograph Series, 38:1–57. Beachwood, OH: Institute of Mathematical Statistics. http://projecteuclid.org/euclid.lnms/1215540960.

Rao, Radhakrishna, and Yuehua Wu. 1989. “A Strongly Consistent Procedure for Model Selection in a Regression Problem.” Biometrika 76 (2): 369–74. https://doi.org/10.1093/biomet/76.2.369.

Rissanen, J. 1978. “Modeling by Shortest Data Description.” Automatica 14 (5): 465–71. https://doi.org/10.1016/0005-1098(78)90005-5.

Saefken, Benjamin, Thomas Kneib, Clara-Sophie van Waveren, and Sonja Greven. 2014. “A Unifying Approach to the Estimation of the Conditional Akaike Information in Generalized Linear Mixed Models.” Electronic Journal of Statistics 8 (1): 201–25. https://doi.org/10.1214/14-EJS881.

Schwarz, Gideon. 1978. “Estimating the Dimension of a Model.” The Annals of Statistics 6 (2): 461–64. https://doi.org/10.1214/aos/1176344136.

Shen, Xiaotong, and Hsin-Cheng Huang. 2006. “Optimal Model Assessment, Selection, and Combination.” Journal of the American Statistical Association 101 (474): 554–68. https://doi.org/10.1198/016214505000001078.

Shen, Xiaotong, Hsin-Cheng Huang, and Jimmy Ye. 2004. “Adaptive Model Selection and Assessment for Exponential Family Distributions.” Technometrics 46 (3): 306–17. https://doi.org/10.1198/004017004000000338.

Shen, Xiaotong, and Jianming Ye. 2002. “Adaptive Model Selection.” Journal of the American Statistical Association 97 (457): 210–21. https://doi.org/10.1198/016214502753479356.

Shibata, Ritei. 1989. “Statistical Aspects of Model Selection.” In From Data to Model, edited by Professor Jan C. Willems, 215–40. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-75007-6_5.

Stein, Charles M. 1981. “Estimation of the Mean of a Multivariate Normal Distribution.” The Annals of Statistics 9 (6): 1135–51. https://doi.org/10.1214/aos/1176345632.

Stone, M. 1979. “Comments on Model Selection Criteria of Akaike and Schwarz.” Journal of the Royal Statistical Society. Series B (Methodological) 41 (2): 276–78. http://www.jstor.org/stable/2985044.

———. 1977. “An Asymptotic Equivalence of Choice of Model by Cross-Validation and Akaike’s Criterion.” Journal of the Royal Statistical Society. Series B (Methodological) 39 (1): 44–47. http://www.stat.washington.edu/courses/stat527/s14/readings/Stone1977.pdf.

Sugiura, Nariaki. 1978. “Further Analysts of the Data by Akaike’s Information Criterion and the Finite Corrections.” Communications in Statistics - Theory and Methods 7 (1): 13–26. https://doi.org/10.1080/03610927808827599.

Taddy, Matt. 2013. “One-Step Estimator Paths for Concave Regularization,” August. http://arxiv.org/abs/1308.5623.

Tharmaratnam, Kukatharmini, and Gerda Claeskens. 2013. “A Comparison of Robust Versions of the AIC Based on M-, S- and MM-Estimators.” Statistics 47 (1): 216–35. https://doi.org/10.1080/02331888.2011.568120.

Tibshirani, Robert. 1996. “Regression Shrinkage and Selection via the Lasso.” Journal of the Royal Statistical Society. Series B (Methodological) 58 (1): 267–88. http://statweb.stanford.edu/~tibs/lasso/lasso.pdf.

Tibshirani, Ryan J. 2015. “Degrees of Freedom and Model Search.” Statistica Sinica 25 (3): 1265–96. http://arxiv.org/abs/1402.1920.

Ye, Jianming. 1998. “On Measuring and Correcting the Effects of Data Mining and Model Selection.” Journal of the American Statistical Association 93 (441): 120–31. https://doi.org/10.1080/01621459.1998.10474094.

Yuan, Ming, and Yi Lin. 2006. “Model Selection and Estimation in Regression with Grouped Variables.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 (1): 49–67. https://doi.org/10.1111/j.1467-9868.2005.00532.x.

Zhang, Yiyun, Runze Li, and Chih-Ling Tsai. 2010. “Regularization Parameter Selections via Generalized Information Criterion.” Journal of the American Statistical Association 105 (489): 312–23. https://doi.org/10.1198/jasa.2009.tm08013.

Zou, Hui, Trevor Hastie, and Robert Tibshirani. 2007. “On the ‘Degrees of Freedom’ of the Lasso.” The Annals of Statistics 35 (5): 2173–92. https://doi.org/10.1214/009053607000000127.