# Post-selection inference

Adaptive data analysis without cheating

August 20, 2017

Once model selection has interfered with the purity of your data, how do you do valid inference? 🏗

Tricky in general. There is an overview by Cosma Shalizi which mostly comes down in favour of data splitting, as the approach whose complications are least extravagant. But data splitting requires a fresh holdout for each successive inference, which is still not ideal when data are limited.

Here’s an approach for more extended chains of inference, from the school known as *adaptive data analysis*: The reusable holdout: Preserving validity in adaptive data analysis, which, like everything these days, uses differential privacy methods. Aaron Roth’s explanation is pretty clear. Soon I will analyse fruit smoothies as differential privacy for bananas.
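The core trick of the reusable holdout is the Thresholdout mechanism: answer each query from the training set unless it visibly disagrees with a noise-perturbed holdout estimate, and only then spend holdout information. A minimal sketch, assuming bounded queries; the parameter values and function names here are illustrative, not the paper's calibration:

```python
import numpy as np

def thresholdout(train, holdout, phi, threshold=0.1, scale=0.01, seed=None):
    """Sketch of the Thresholdout idea: report the training-set mean of
    a query phi, unless it strays from the holdout mean by more than a
    noisy threshold, in which case report a noised holdout answer.
    threshold/scale are illustrative choices, not tuned constants."""
    rng = np.random.default_rng(seed)
    a_train = np.mean(phi(train))
    a_hold = np.mean(phi(holdout))
    # Noisy comparison: Laplace noise keeps the decision differentially private.
    if abs(a_train - a_hold) > threshold + rng.laplace(scale=2 * scale):
        return a_hold + rng.laplace(scale=scale)
    return a_train

rng = np.random.default_rng(0)
train = rng.uniform(size=2000)    # toy data on [0, 1]
holdout = rng.uniform(size=2000)
ans = thresholdout(train, holdout, lambda x: x, seed=1)
```

Because most answers come straight from the training set, the holdout's "privacy budget" is only consumed by queries that overfit, which is what lets it be reused across an adaptive analysis.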

Some models have special powers in this regard, e.g. LASSO-style approaches. Much to do here, but for now there is a simple, relaxed walk-through by Peter Ellis on post-regression inference using the LASSO for COVID-19 and hydroxychloroquine, with some side glances at Hastie, Tibshirani, and Wainwright (2015).
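The simplest honest version of LASSO-then-inference is sample splitting: run the LASSO on one half of the data to choose variables, then run classical OLS inference on the other half, which is valid because the selection event is independent of the inference half. A minimal sketch with a hand-rolled coordinate-descent LASSO (all names and tuning values here are illustrative):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal lasso via cyclic coordinate descent with soft-thresholding.
    Assumes roughly standardized columns; lam is the l1 penalty."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [2.0, -1.5]                        # two真 nonzero coefficients
y = X @ beta_true + rng.standard_normal(n)

# Split: select variables on the first half, do inference on the second.
X1, y1, X2, y2 = X[:n // 2], y[:n // 2], X[n // 2:], y[n // 2:]
selected = np.flatnonzero(np.abs(lasso_cd(X1, y1, lam=0.3)) > 1e-8)

# Classical OLS t-statistics on the held-out half: valid because the
# selection event never touched this half of the data.
Xs = X2[:, selected]
bhat, *_ = np.linalg.lstsq(Xs, y2, rcond=None)
resid = y2 - Xs @ bhat
sigma2 = resid @ resid / (len(y2) - Xs.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xs.T @ Xs)))
```

The price is efficiency: only half the sample informs the final standard errors, which is exactly the limitation that motivates the reusable-holdout and selective-inference alternatives above.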

## 1 References

*arXiv:1511.02513 [cs]*.

*The Annals of Statistics*.

*The Annals of Statistics*.

*Annual Review of Economics*.

*Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing (STOC ’15)*.

*Communications of the ACM*.

*Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science*. FOCS ’14.

Hastie, Trevor, Robert Tibshirani, and Martin Wainwright. 2015. *Statistical Learning with Sparsity: The Lasso and Generalizations*.

*The Elements of Statistical Learning: Data Mining, Inference and Prediction*.

*arXiv:1909.03577 [cs, stat]*.

*arXiv:1311.6238 [math, stat]*.

*arXiv:1401.3889 [stat]*.