Model complexity penalties
Information criteria, degrees of freedom etc
April 22, 2015 — November 1, 2024
In classical statistics there are families of model complexity penalties, which are loosely and collectively referred to as the “degrees of freedom” of a model.
🏗 Explain AIC, \(C_p\), SURE, and BIC-type degrees of freedom, and whatever variants there are out there.
There is a classical justification that usually seems to ground out in an argument about model marginal likelihood, which is decreased by adding superfluous terms. Other types of complexity penalties are conceivable, however, such as those arising from minimum description length and singular learning theory.
Here I focus on the classical ones, since those are the terms I have needed to disambiguate. Complexity penalties crop up in model selection (i.e. choosing the complexity of the model appropriate to your data). Efron (2004) is an excellent introduction, compressing 30 years of theory into 2 pages. Massart (2000) seems more fashionable in flavour:
The reader who is not familiar with model selection via [complexity] penalization can legitimately ask the question: where does the idea of penalization come from? It is possible to answer this question at two different levels:
at some intuitive level by presenting the heuristics of one of the first criteria of this kind which has been introduced by Akaike (1973);
at some technical level by explaining why such a strategy of model selection has some chances to succeed.
Yuan and Lin (2006) is an example of the kind of argumentation I need in order to use linear-model approximations for general application of degrees of freedom in sparse model selection.
(Zou, Hastie, and Tibshirani 2007):
Degrees of freedom is a familiar phrase for many statisticians. In linear regression the degrees of freedom is the number of estimated predictors. Degrees of freedom is often used to quantify the model complexity of a statistical modelling procedure (Hastie and Tibshirani 1990). However, generally speaking, there is no exact correspondence between the degrees of freedom and the number of parameters in the model (Ye 1998). […] Stein’s unbiased risk estimation (SURE) theory (Stein 1981) gives a rigorous definition of the degrees of freedom for any fitting procedure. […] Efron (Efron 2004) showed that \(C_p\) is an unbiased estimator of the true prediction error, and in some settings it offers substantially better accuracy than cross-validation and related nonparametric methods. Thus degrees of freedom plays an important role in model assessment and selection. Donoho and Johnstone (Donoho and Johnstone 1995) used the SURE theory to derive the degrees of freedom of soft thresholding and showed that it leads to an adaptive wavelet shrinkage procedure called SureShrink. (Ye 1998) and (Shen and Ye 2002) showed that the degrees of freedom can capture the inherent uncertainty in modelling and frequentist model selection. Shen and Ye (Shen and Ye 2002) and (Shen, Huang, and Ye 2004) further proved that the degrees of freedom provides an adaptive model selection criterion that performs better than the fixed-penalty model selection criteria.
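Writing out the covariance-penalty version that the quote refers to (following my reading of Stein 1981 and Efron 2004): for a fitting procedure \(\hat\mu(y)\) applied to observations with \(\operatorname{Var}(y_i)=\sigma^2\), the degrees of freedom are

\[
\operatorname{df}(\hat\mu) = \frac{1}{\sigma^2}\sum_{i=1}^{n}\operatorname{Cov}(\hat\mu_i, y_i),
\]

and the \(C_p\)-type estimate of prediction error is training error plus a covariance penalty,

\[
\widehat{\operatorname{Err}} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat\mu_i)^2 + \frac{2\sigma^2}{n}\operatorname{df}(\hat\mu).
\]

For ordinary least squares with \(p\) predictors this recovers \(\operatorname{df}=p\), which is the sense in which the generalised definition extends the familiar one.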
1 Information Criteria
Akaike and friends. In M-estimation (e.g. maximum likelihood estimation and robust estimation) these are marvellous and general shortcuts for model selection (i.e. choosing the complexity of the model appropriate to your data) without resorting to computationally expensive cross-validation.
For all of these, a thing called the number of effective degrees of freedom is important. There are several different definitions for that, and they only sometimes coincide, so I leave that for a different notebook. Claeskens and Hjort (2008) and Konishi and Kitagawa (2008) are probably canonical.
Information criteria can ideally do the same thing as cross-validation (i.e. select the ideal regularisation given possible models and data) at a small fraction of the computational cost. Indeed, they are asymptotically equivalent; see below.
To learn:
- How this interacts with robust estimators
- How to use AIC with nonparametric or high-dimensional methods (GIC)
- How it relates to minimum description length (e.g. Andrew R. Barron et al. (2008))
Influential current English-language texts in this area are Burnham and Anderson (2002), Claeskens and Hjort (2008) and Konishi and Kitagawa (2008). The first of these is highly cited and brought the AIC method into the Western mainstream from the specialised fringes where it had been. The latter two focus on extensions such as TIC and GIC.
🏗 general description.
🏗 clarify relationship to Minimum Description Length, Rissanen-style.
Bondell, Krishna, and Ghosh (2010):
In the literature, selection criteria are usually classified into two categories: consistent (e.g., the Bayesian information criterion BIC, Schwarz, 1978) and efficient (e.g., the Akaike information criterion AIC, Akaike, 1974; the generalized cross-validation GCV, Craven and Wahba, 1979). A consistent criterion identifies the true model with a probability that approaches 1 in large samples when a set of candidate models contains the true model. An efficient criterion selects the model so that its average squared error is asymptotically equivalent to the minimum offered by the candidate models when the true model is approximated by a family of candidate models. Detailed discussions on efficiency and consistency can be found in Shibata (1981, 1984), Li (1987), Shao (1997) and McQuarrie and Tsai (1998).
1.1 Akaike Information Criterion (AIC)
The classic. 🏗
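For reference, the criterion itself: with \(\hat\theta\) the maximum likelihood estimate and \(k\) the number of freely estimated parameters,

\[
\mathrm{AIC} = -2\log L(\hat\theta) + 2k,
\]

and we pick the candidate model with the smallest value. A minimal sketch in Python of AIC-based order selection for Gaussian-noise polynomial regression (everything here, names included, is illustrative rather than canonical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x - 0.3 * x**2 + rng.normal(scale=0.5, size=n)

def gaussian_aic(y, X):
    """AIC for least-squares regression with unknown noise variance,
    up to an additive constant shared by all candidate models."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1  # regression coefficients plus the noise variance
    return len(y) * np.log(rss / len(y)) + 2 * k

# Candidate models: polynomial regressions of increasing degree.
aics = [gaussian_aic(y, np.vander(x, d + 1, increasing=True)) for d in range(6)]
print(int(np.argmin(aics)))  # usually recovers the true degree, 2
```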
1.2 Takeuchi Information Criterion (TIC)
Apparently this one was influential in Japan, but untranslated into English, and so only belatedly common in the West. Good explanations are in Claeskens and Hjort (2008) and Konishi and Kitagawa (2008). It relaxes the assumption that the model is Fisher efficient (i.e. that the true generating process is included in your model class and that, with enough data, you would discover it).
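Schematically (notation varies between sources, so take this as my paraphrase): with per-observation scores \(s_i = \nabla_\theta \log f(x_i\mid\hat\theta)\), the TIC replaces AIC’s \(2k\) with a sandwich trace,

\[
\mathrm{TIC} = -2\sum_{i=1}^n \log f(x_i \mid \hat\theta) + 2\operatorname{tr}\bigl(\hat{J}^{-1}\hat{K}\bigr),
\qquad
\hat{J} = -\frac{1}{n}\sum_i \nabla^2_\theta \log f(x_i\mid\hat\theta),
\qquad
\hat{K} = \frac{1}{n}\sum_i s_i s_i^\top.
\]

When the model is correctly specified, \(\hat{J}\) and \(\hat{K}\) estimate the same Fisher information matrix, the trace tends to \(k\), and we recover AIC.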
1.3 Konishi and Kitagawa’s Generalised Information Criterion (GIC)
Takes information criteria to general (e.g. robust, penalised) M-estimation instead of pure ML estimation, and also relaxes the assumption that we even have the “true” model in our class (Konishi and Kitagawa 1996); C&C Burman and Nolan (1995), probably others. In particular, you are no longer necessarily fitting the model by maximising a likelihood or minimising squared error. Claeskens and Hjort (2008) mention the “Robustified Information Criterion” in passing, which may be related?
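As I read Konishi and Kitagawa (1996), the construction is the same sandwich-trace idea, but with the estimating function \(\psi\) of the M-estimator in place of the likelihood score. Roughly,

\[
\mathrm{GIC} = -2\sum_{i=1}^n \log f(x_i\mid\hat\theta)
+ 2\operatorname{tr}\bigl(\hat{R}^{-1}\hat{Q}\bigr),
\]

where \(\hat{R}\) is the empirical average of \(-\partial\psi/\partial\theta\) and \(\hat{Q}\) the empirical average of \(\psi(x_i,\hat\theta)\) times the score of the fitted density; TIC is the special case where \(\psi\) is the likelihood score. The derivative of \(\psi\) inside \(\hat{R}\) is exactly where non-smooth penalties cause trouble, which is the issue flagged in the next note.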
🏗 Explain my laborious reasoning that generalised Akaike information criteria for penalised regression don’t seem to work when the penalty term is not differentiable (cross-validation works fine, though, and possibly also BIC), and the issues that therefore arise in model selection for such models in the sparse case.
1.4 Focussed information criterion (FIC)
Claeskens and Hjort define this (Claeskens and Hjort (2008), chapter 6):
The model selection methods presented earlier (such as AIC and the BIC) have one thing in common: they select one single ‘best model’, which should then be used to explain all aspects of the mechanisms underlying the data and predict all future data points. The tolerance discussion in chapter 5 showed that sometimes one model is best for estimating one type of estimand, whereas another model is best for another estimand. The point of view expressed via the [FIC] is that a ‘best model’ should depend on the parameter under focus, such as the mean, or the variance, or the particular covariate values, etc. Thus the FIC allows and encourages different models to be selected for different parameters of interest.
This sounds very logical; of course, then one must do more work to make it go.
1.5 Network information criterion
Murata, Yoshizawa, and Amari (1994): “an estimator of the expected loss of a loss function \(\ell(\theta)+\lambda H(\theta)\) where \(H(\theta)\) is a regularisation term”.
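My reading, hedged because I am going partly from secondary accounts: this is again a TIC-style sandwich, built from the regularised loss. Writing \(d(x,\theta) = \ell(x,\theta) + \lambda H(\theta)\),

\[
\mathrm{NIC} \approx \frac{1}{n}\sum_{i=1}^n d(x_i,\hat\theta) + \frac{1}{n}\operatorname{tr}\bigl(\hat{G}^{-1}\hat{Q}\bigr),
\]

with \(\hat{G}\) the Hessian of the empirical regularised loss at \(\hat\theta\) and \(\hat{Q}\) the empirical covariance of its per-observation gradients.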
1.6 Regularization information criterion
Shibata (1989) - is this distinct from GIC?
1.7 Bootstrap information criterion
A compromise between the computational cheapness of analytic information criteria and the practical simplicity of cross-validation: the bias-correction penalty is estimated by bootstrap resampling rather than derived analytically.
Konishi and Kitagawa (2008) ch 8. See Claeskens and Hjort (2008) 6.3 for a bootstrap-FIC.
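A minimal sketch of the bootstrap bias-correction idea, as I understand the construction: instead of an analytic penalty, estimate the optimism of the maximised log-likelihood by resampling and subtract it. The toy model and every name in the snippet are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=1.0, scale=2.0, size=100)

def fit(sample):
    """ML estimates for a univariate Gaussian: mean and (biased) variance."""
    return sample.mean(), sample.var()

def loglik(params, sample):
    mu, var = params
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (sample - mu) ** 2 / var)

# Bootstrap estimate of the optimism of the maximised log-likelihood.
B = 500
optimism = np.empty(B)
for b in range(B):
    star = rng.choice(y, size=len(y), replace=True)
    theta_star = fit(star)
    optimism[b] = loglik(theta_star, star) - loglik(theta_star, y)

boot_ic = -2 * loglik(fit(y), y) + 2 * optimism.mean()
aic = -2 * loglik(fit(y), y) + 2 * 2  # two free parameters, for comparison
print(boot_ic, aic)
```

With two free parameters the bootstrap optimism should hover around 2, so the result roughly agrees with AIC here; the payoff is that the same recipe applies when no analytic bias correction is available.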
1.8 Consistency of model order selected - AIC
Akaike-type information criteria are not asymptotically consistent (see Konishi and Kitagawa (2008)), in the sense that even if there is a true model among the candidates, you do not select it with probability approaching 1 in the large-sample limit. However, the distribution of selected model orders does not get worse as n increases. Burnham and Anderson (2002) 6.3 and Konishi and Kitagawa (2008) 3.5.2 discuss this. In a sense it would be surprising if AIC did do especially well at selecting model order, since the criterion is designed to minimise prediction error, not model selection error; model order is more or less a nuisance parameter in this framework.
TBC.
1.9 Cross-validation equivalence
Konishi and Kitagawa (2008), 10.1.4 discuss the asymptotic equivalence of AIC/TIC/GIC and cross validation under various circumstances, attributing the equivalence results to Stone (1977) and Shibata (1989). Claeskens and Hjort (2008) proves a similar result.
1.10 Automatic GIC
🏗 I know that Konishi and Kitagawa (1996) give formulae for loss functions for any M-estimation and penalisation procedure, but in general the degrees-of-freedom matrix trace calculation is nasty, and only in principle estimable from the data, requiring a matrix product involving the Hessian at every data point. This is not necessarily computationally tractable; I know of formulae only for GLMs and robust regression with \(\ell_2\) penalties. Can we get such penalties for more general ML fits?
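To make the complaint concrete, here is the tractable GLM case: for logistic regression the per-observation scores and the Hessian are analytic, so a TIC/GIC-style trace penalty is cheap to compute. This is a hand-rolled illustration, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([1.0, -0.5, 0.0]))))

# Crude Newton fit of logistic regression.
beta = np.zeros(p)
for _ in range(50):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1 - mu))[:, None])
    beta += np.linalg.solve(hess, grad)

mu = 1 / (1 + np.exp(-X @ beta))
scores = X * (y - mu)[:, None]                  # per-observation score vectors
K = scores.T @ scores / n                       # outer-product ("sandwich") term
J = X.T @ (X * (mu * (1 - mu))[:, None]) / n    # average observed information
penalty = np.trace(np.linalg.solve(J, K))       # ~ p when the model is well specified
print(penalty)
```

For a well-specified GLM the trace lands near the parameter count; the pain for general M-estimators is that the analogues of `scores` and `hess` are not analytic and need automatic differentiation or worse.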
1.11 GIC & the LASSO
I thought this didn’t work because we needed the second derivative of the penalty; but see Bondell, Krishna, and Ghosh (2010).
1.12 Information criteria at scale
Big-data information criteria. AIC is already computationally cheaper than cross-validation. What about when my data is so large that I would like to select my model before looking at all of it, with such-and-such a guarantee of goodness? Can I do AIC at scale? If I am fitting a model using SGD, can I estimate my model order using partial data? How? I’m interested in doing this in a way that preserves the property of being computationally cheaper than cross-validating.
Here’s an example… Bondell, Krishna, and Ghosh (2010):
In order to avoid complete enumeration of all possible \(2^{p+q}\) models, Wolfinger (1993) and Diggle, Liang and Zeger (1994) recommended the Restricted Information Criterion (denoted by REML.IC), in that, by using the most complex mean structure, selection is first performed on the variance-covariance structure by computing the AIC and/or BIC. Given the best covariance structure, selection is then performed on the fixed effects. Alternatively, Pu and Niu (2006) proposed the EGIC (Extended GIC), where using the BIC, selection is first performed on the fixed effects by including all of the random effects into the model. Once the fixed effect structure is chosen, selection is then performed on the random effects.
In general I’d like to avoid enumerating the models as much as possible and simply select relevant predictors with high probability, compressive-sensing style.
2 Consistent: Bayesian Information Criteria
a.k.a. Schwarz Information Criterion. Also co-invented by the unstoppable Akaike. (Hirotugu Akaike 1978; Schwarz 1978)
This is a different family to the original AIC. This has a justification in terms of MDL and of Bayes risk? Different regularity conditions, something something…
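For reference, the criterion and the marginal-likelihood argument, as far as I can reconstruct it: a Laplace approximation to a model’s log marginal likelihood gives

\[
\log p(y \mid M) = \log p(y \mid \hat\theta, M) - \frac{k}{2}\log n + O(1),
\]

so maximising the approximate marginal likelihood over models is, to this order, minimising

\[
\mathrm{BIC} = -2\log p(y\mid\hat\theta, M) + k\log n .
\]

The \(k\log n\) penalty grows with \(n\), unlike AIC’s fixed \(2k\), which is where the consistency comes from.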
How would this work with regularisation? Apparently Machado (1993) extends the setup to robust inference, much as the GIC extends the AIC. Claeskens and Hjort (2008) give an easy summary and more general settings.
3 Consistent and/or efficient: Nishii’s Generalised Information Criterion
Nishii (1984), commended by Zhang, Li, and Tsai (2010) as a unifying formalism for these efficient/consistent criteria, includes efficient-type and consistent-type information penalties as special cases. I don’t know much about this.
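As I understand the unifying form (second-hand, via Zhang, Li, and Tsai 2010), everything is a penalised maximised likelihood

\[
\mathrm{GIC}_{\lambda_n} = -2\log L(\hat\theta) + \lambda_n k,
\]

with the penalty weight \(\lambda_n\) deciding the behaviour: \(\lambda_n = 2\) gives AIC-type (efficient) selection, \(\lambda_n = \log n\) gives BIC-type (consistent) selection, and consistency generally requires \(\lambda_n \to \infty\) with \(\lambda_n/n \to 0\).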
4 Quasilikelihood
See QIC.
5 Local Learning coefficient
Looks vaguely related, but does not select different models so much as quantify the current state of a model. Something algebraic geometry something something singular learning theory.