Cross validation
September 5, 2016 — May 13, 2021
On substituting simulation for analysis in model selection, e.g. choosing the “right” regularization parameter for sparse regression.
The computationally expensive default option when your model doesn’t have any obvious shortcuts for complexity regularization, for example when AIC cannot be shown to work.
To learn: how this interacts with Bayesian inference.
1 Basic Cross Validation
🏗
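In the meantime, a minimal sketch of the workhorse recipe, assuming a scikit-learn-style workflow and synthetic data (the penalty grid and all settings here are arbitrary, illustration only): K-fold cross validation to choose the lasso penalty in a sparse regression, i.e. the use case mentioned above.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, cross_val_score

# Synthetic sparse-regression problem: 200 observations, 50 features,
# only 5 of which actually matter.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

# Candidate regularisation strengths (arbitrary grid for illustration).
alphas = np.logspace(-2, 2, 20)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# For each alpha, average held-out MSE over the 5 folds.
cv_mse = [
    -cross_val_score(Lasso(alpha=a, max_iter=10_000), X, y,
                     cv=cv, scoring="neg_mean_squared_error").mean()
    for a in alphas
]

best_alpha = alphas[int(np.argmin(cv_mse))]
print(f"CV-selected alpha: {best_alpha:.3g}")
```

In practice `sklearn.linear_model.LassoCV` does the same search more efficiently by reusing the regularisation path, but the explicit loop makes the mechanics plain.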
2 Generalised Cross Validation
Why the name? It’s specialised cross-validation, AFAICS (Andrews 1991; Golub, Heath, and Wahba 1979; Li 1987).
🏗 Hat matrix, smoother matrix. Note comparative computational efficiency. Define hat matrix.
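Until then, the standard statement as I understand it: for a linear smoother the fitted values are a linear map of the observations, $\hat{y} = H(\lambda) y$, and $H(\lambda)$ is the hat (a.k.a. smoother) matrix. Leave-one-out CV then has a closed form in the leverages $H_{ii}(\lambda)$, and GCV (Golub, Heath, and Wahba 1979) replaces those individual leverages by their average:

$$
\operatorname{LOOCV}(\lambda)
  = \frac{1}{n} \sum_{i=1}^{n}
    \left( \frac{y_i - \hat{y}_i}{1 - H_{ii}(\lambda)} \right)^{2},
\qquad
\operatorname{GCV}(\lambda)
  = \frac{\frac{1}{n} \left\| \bigl(I - H(\lambda)\bigr) y \right\|^{2}}
         {\left( \frac{1}{n} \operatorname{tr}\bigl(I - H(\lambda)\bigr) \right)^{2}}.
$$

So a single fit plus a trace stands in for $n$ refits, which is where the comparative computational efficiency comes from.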
3 Bayesian Cross Validation
🏗️
4 What even is cross validation?
I always thought the answer here was simple: cross validation is asymptotically equivalent to generalised Akaike information criteria (e.g. Stone 1977), and it is related to the bootstrap in various ways.
But there is other stuff going on. Here is an interesting sampling of opinions: Rob Tibshirani, Yuling Yao, and Aki Vehtari on cross validation.
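A toy numerical check of that equivalence, on made-up data and an arbitrary model family (nested polynomial regressions), purely for illustration: AIC and leave-one-out CV should tend to agree on which model to pick.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 0.5 * x - 0.7 * x**2 + rng.normal(scale=0.5, size=n)  # true order 2

for degree in range(1, 6):
    # Design matrix with polynomial terms up to `degree`.
    X = np.column_stack([x**k for k in range(1, degree + 1)])

    # AIC from a Gaussian linear model fit by maximum likelihood.
    aic = sm.OLS(y, sm.add_constant(X)).fit().aic

    # Leave-one-out cross-validated mean squared error.
    loo_mse = -cross_val_score(
        LinearRegression(), X, y,
        cv=LeaveOneOut(), scoring="neg_mean_squared_error"
    ).mean()

    print(f"degree {degree}: AIC={aic:8.1f}  LOO-CV MSE={loo_mse:.4f}")
```

Both criteria should bottom out around the true degree here; Stone's point is that the agreement is no accident.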
4.1 Testing leakage
The vtreat introduction mentions their article on why you need hold-out data, and also cites Perlich and Świrszcz (2011):
Cross-methods such as cross-validation, and cross-prediction are effective tools for many machine learning, statistics, and data science related applications. They are useful for parameter selection, model selection, impact/target encoding of high cardinality variables, stacking models, and super learning. As cross-methods simulate access to an out of sample data set the same as the original data, they are more statistically efficient, lower variance, than partitioning training data into calibration/training/holdout sets. However, cross-methods do not satisfy the full exchangeability conditions that full hold-out methods have. This introduces some additional statistical trade-offs when using cross-methods, beyond the obvious increases in computational cost.
Specifically, cross-methods can introduce an information leak into the modelling process.
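A sketch of the crudest version of that leak, on made-up data: target-encoding a pure-noise, high-cardinality categorical using the full training set smuggles the target into the feature, so it looks predictive in-sample; out-of-fold (cross) encoding removes most of that, although, per the quote, it still does not restore the full exchangeability of a genuine hold-out set. The encoder here is a hand-rolled toy, not vtreat's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n, n_levels = 1000, 200

# A high-cardinality categorical that is pure noise w.r.t. the target.
cat = rng.integers(0, n_levels, size=n)
y = rng.normal(size=n)

def target_encode(cat, y, fit_idx, apply_idx):
    """Encode each level by the mean of y over fit_idx; apply to apply_idx."""
    overall = y[fit_idx].mean()
    means = {level: y[fit_idx][cat[fit_idx] == level].mean()
             for level in np.unique(cat[fit_idx])}
    return np.array([means.get(level, overall) for level in cat[apply_idx]])

# Naive encoding: fit and apply on the same rows, so the target leaks in.
naive = target_encode(cat, y, np.arange(n), np.arange(n))
print("naive in-sample R^2:",
      LinearRegression().fit(naive[:, None], y).score(naive[:, None], y))

# Cross (out-of-fold) encoding: each row is encoded using only the other folds.
oof = np.empty(n)
for fit_idx, apply_idx in KFold(n_splits=5, shuffle=True,
                                random_state=0).split(cat.reshape(-1, 1)):
    oof[apply_idx] = target_encode(cat, y, fit_idx, apply_idx)
print("out-of-fold R^2:",
      LinearRegression().fit(oof[:, None], y).score(oof[:, None], y))
```

The naive encoding of a noise variable should score a visibly nonzero R², while the out-of-fold version should sit near zero.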