Multiple testing

How to go data mining for models without, accidentally or otherwise, “dredging” for them. If you keep testing models until you find some that fit (which you usually will), how do you know that the fit is in any sense interesting? How sharp will your conclusions be? How does it work when you are testing against a possibly uncountable continuum of hypotheses? (One perspective on sparsity penalties is precisely this, I am told.)
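To make the hazard concrete, here is a minimal simulation of my own (nothing beyond NumPy and SciPy): run a thousand tests on pure noise and the conventional 5% threshold manufactures dozens of “discoveries”.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, n_obs = 1000, 30

# Every null hypothesis is true: each row is pure N(0, 1) noise.
x = rng.normal(size=(n_tests, n_obs))

# One-sample t-test of "mean = 0" for each of the 1000 noise datasets.
p = stats.ttest_1samp(x, popmean=0.0, axis=1).pvalue

# At the usual 0.05 threshold we expect ~50 spurious "discoveries".
print(f"{(p < 0.05).sum()} of {n_tests} pure-noise tests look significant")
```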

Model selection is this writ small: testing how many variables to include in your model.

In modern high-dimensional models, where you have potentially many explanatory variables, handling the combinatorial explosion of possible variables to include can also be considered a multiple testing problem, although we tend to regard it as a smoothing and model selection problem.
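The selection effect is easy to reproduce; in the following sketch (again a toy of my own, not anyone’s published method) the single best of 100 entirely irrelevant predictors still earns an impressively small p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 50, 100
X = rng.normal(size=(n, p))  # 100 candidate predictors, all pure noise
y = rng.normal(size=n)       # response, unrelated to every predictor

# Naive "selection": keep whichever predictor correlates best with y.
pvals = np.array([stats.pearsonr(X[:, j], y).pvalue for j in range(p)])
print("best naive p-value:", pvals.min())  # often < 0.01 despite no signal
```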

This all gets more complicated when you think about many people testing many hypotheses in many different experiments; then you run into many more issues than just these, such as publication bias and suchlike.

Suggestive connection:

Moritz Hardt, The machine learning leaderboard problem:

In this post, I will describe a method to climb the public leaderboard without even looking at the data. The algorithm is so simple and natural that an unwitting analyst might just run it. We will see that in Kaggle’s famous Heritage Health Prize competition this might have propelled a participant from rank around 150 into the top 10 on the public leaderboard without making progress on the actual problem.

I get super excited. I keep climbing the leaderboard! Who would’ve thought that this machine learning thing was so easy? So, I go write a blog post on Medium about Big Data and score a job at DeepCompeting.ly, the latest data science startup in the city. Life is pretty sweet. I pick up indoor rock climbing, sign up for wood working classes; I read Proust and books about espresso. Two months later the competition closes and Kaggle releases the final score. What an embarrassment! Wacky boosting did nothing whatsoever on the final test set. I get fired from DeepCompeting.ly days before the buyout. My spouse dumps me. The lease expires. I get evicted from my apartment in the Mission. Inevitably, I hike the Pacific Crest Trail and write a novel about it.
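For flavour, here is a toy re-enactment of the trick Hardt describes, under assumptions of my own (binary labels scored by accuracy): submit random guesses, keep the ones that happened to beat chance on the public leaderboard, and majority-vote them. The public score climbs well above chance; the private score does not budge.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 500  # leaderboard size, number of random submissions

public = rng.integers(0, 2, n)   # hidden public-leaderboard labels
private = rng.integers(0, 2, n)  # hidden final-test labels

subs = rng.integers(0, 2, (k, n))
acc_pub = (subs == public).mean(axis=1)

# Majority-vote only the submissions that scored above chance publicly.
vote = subs[acc_pub > 0.5].mean(axis=0) > 0.5

print("public score :", (vote == public).mean())   # well above 0.5
print("private score:", (vote == private).mean())  # back near 0.5
```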

See BlHa15 and DFHP15 for more of that.

P-value hacking

False discovery rate

FDR control: instead of controlling the probability of making any false rejection at all, control the expected proportion of false rejections among the rejections you make, à la Benjamini and Hochberg (1995)…
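A textbook sketch of the Benjamini–Hochberg step-up procedure (the function name is mine; the procedure is valid under independence, or positive dependence, of the p-values):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of rejections controlling the FDR at level alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # Largest k such that p_(k) <= (k / m) * alpha; reject the k smallest.
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

In practice you would reach for statsmodels, where the same step-up rule is `multipletests(p, method="fdr_bh")`.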

Familywise error rate

Control the probability of even one false rejection across the whole family of tests: Šidák correction, Bonferroni correction…
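Both corrections reduce to a tightened per-test threshold; a minimal sketch (function name mine):

```python
def fwer_thresholds(m, alpha=0.05):
    """Per-test significance levels that keep the familywise error
    rate at alpha across m tests."""
    bonferroni = alpha / m                    # valid under any dependence
    sidak = 1.0 - (1.0 - alpha) ** (1.0 / m)  # exact under independence
    return bonferroni, sidak

print(fwer_thresholds(1000))  # both come out near 5e-05 for m = 1000
```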

Post selection inference

See post selection inference.

Misc applied

Evan’s Awesome A/B Tools

http://kadavy.net/blog/posts/aa-testing/

http://businessofsoftware.org/2013/06/jason-cohen-ceo-wp-engine-why-data-can-make-you-do-the-wrong-thing/

http://www.evanmiller.org/the-low-base-rate-problem.html

Abramovich, Felix, Yoav Benjamini, David L. Donoho, and Iain M. Johnstone. 2006. “Adapting to Unknown Sparsity by Controlling the False Discovery Rate.” The Annals of Statistics 34 (2): 584–653. https://doi.org/10.1214/009053606000000074.

Aickin, M, and H Gensler. 1996. “Adjusting for Multiple Testing When Reporting Research Results: The Bonferroni Vs Holm Methods.” American Journal of Public Health 86 (5): 726–28. https://doi.org/10.2105/AJPH.86.5.726.

Ansley, Craig F., and Robert Kohn. 1985. “Estimation, Filtering, and Smoothing in State Space Models with Incompletely Specified Initial Conditions.” The Annals of Statistics 13 (4): 1286–1316. https://doi.org/10.1214/aos/1176349739.

Arnold, Taylor B., and John W. Emerson. 2011. “Nonparametric Goodness-of-Fit Tests for Discrete Null Distributions.” The R Journal 3 (2): 34–39. http://journal.r-project.org/archive/2011-2/RJournal_2011-2_Arnold+Emerson.pdf.

Bach, Francis. 2009. “Model-Consistent Sparse Estimation Through the Bootstrap.” arXiv:0901.3202 [Cs, Stat]. https://hal.archives-ouvertes.fr/hal-00354771/document.

Barber, Rina Foygel, and Emmanuel J. Candès. 2015. “Controlling the False Discovery Rate via Knockoffs.” The Annals of Statistics 43 (5): 2055–85. https://doi.org/10.1214/15-AOS1337.

Bashtannyk, David M., and Rob J. Hyndman. 2001. “Bandwidth Selection for Kernel Conditional Density Estimation.” Computational Statistics & Data Analysis 36 (3): 279–98. https://doi.org/10.1016/S0167-9473(00)00046-3.

Bassily, Raef, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, and Jonathan Ullman. 2015. “Algorithmic Stability for Adaptive Data Analysis,” November. http://arxiv.org/abs/1511.02513.

Benjamini, Yoav. 2010. “Simultaneous and Selective Inference: Current Successes and Future Challenges.” Biometrical Journal 52 (6): 708–21. https://doi.org/10.1002/bimj.200900299.

Benjamini, Yoav, and Yulia Gavrilov. 2009. “A Simple Forward Selection Procedure Based on False Discovery Rate Control.” The Annals of Applied Statistics 3 (1): 179–98. https://doi.org/10.1214/08-AOAS194.

Benjamini, Yoav, and Yosef Hochberg. 1995. “Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing.” Journal of the Royal Statistical Society. Series B (Methodological) 57 (1): 289–300. http://www.jstor.org/stable/2346101.

Benjamini, Yoav, and Daniel Yekutieli. 2005. “False Discovery Rate–Adjusted Multiple Confidence Intervals for Selected Parameters.” Journal of the American Statistical Association 100 (469): 71–81. https://doi.org/10.1198/016214504000001907.

Berk, Richard, Lawrence Brown, Andreas Buja, Kai Zhang, and Linda Zhao. 2013. “Valid Post-Selection Inference.” The Annals of Statistics 41 (2): 802–37. https://doi.org/10.1214/12-AOS1077.

Blum, Avrim, and Moritz Hardt. 2015. “The Ladder: A Reliable Leaderboard for Machine Learning Competitions,” February. http://arxiv.org/abs/1502.04585.

Buckland, S. T., K. P. Burnham, and N. H. Augustin. 1997. “Model Selection: An Integral Part of Inference.” Biometrics 53 (2): 603–18. https://doi.org/10.2307/2533961.

Bunea, Florentina. 2004. “Consistent Covariate Selection and Post Model Selection Inference in Semiparametric Regression.” The Annals of Statistics 32 (3): 898–927. https://doi.org/10.1214/009053604000000247.

Burnham, Kenneth P., and David R. Anderson. 2004. “Multimodel Inference Understanding AIC and BIC in Model Selection.” Sociological Methods & Research 33 (2): 261–304. https://doi.org/10.1177/0049124104268644.

Bühlmann, Peter, and Sara van de Geer. 2015. “High-Dimensional Inference in Misspecified Linear Models.” Electronic Journal of Statistics 9 (1): 1449–73. https://doi.org/10.1214/15-EJS1041.

Cai, T. Tony, and Wenguang Sun. 2017. “Large-Scale Global and Simultaneous Inference: Estimation and Testing in Very High Dimensions.” Annual Review of Economics 9 (1): 411–39. https://doi.org/10.1146/annurev-economics-063016-104355.

Candès, Emmanuel J., Yingying Fan, Lucas Janson, and Jinchi Lv. 2016. “Panning for Gold: Model-Free Knockoffs for High-Dimensional Controlled Variable Selection.” arXiv Preprint arXiv:1610.02351. https://arxiv.org/abs/1610.02351.

Candès, Emmanuel J., J. Romberg, and T. Tao. 2006. “Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information.” IEEE Transactions on Information Theory 52 (2): 489–509. https://doi.org/10.1109/TIT.2005.862083.

Candès, Emmanuel J., Michael B. Wakin, and Stephen P. Boyd. 2008. “Enhancing Sparsity by Reweighted ℓ1 Minimization.” Journal of Fourier Analysis and Applications 14 (5-6): 877–905. https://doi.org/10.1007/s00041-008-9045-x.

Cavanaugh, Joseph E. 1997. “Unifying the Derivations for the Akaike and Corrected Akaike Information Criteria.” Statistics & Probability Letters 33 (2): 201–8. https://doi.org/10.1016/S0167-7152(96)00128-9.

Cavanaugh, Joseph E., and Robert H. Shumway. 1998. “An Akaike Information Criterion for Model Selection in the Presence of Incomplete Data.” Journal of Statistical Planning and Inference 67 (1): 45–65. https://doi.org/10.1016/S0378-3758(97)00115-8.

Chernozhukov, Victor, Christian Hansen, and Martin Spindler. 2015. “Valid Post-Selection and Post-Regularization Inference: An Elementary, General Approach.” Annual Review of Economics 7 (1): 649–88. https://doi.org/10.1146/annurev-economics-012315-015826.

Claeskens, Gerda, Tatyana Krivobokova, and Jean D. Opsomer. 2009. “Asymptotic Properties of Penalized Spline Estimators.” Biometrika 96 (3): 529–44. https://doi.org/10.1093/biomet/asp035.

Clevenson, M. Lawrence, and James V. Zidek. 1975. “Simultaneous Estimation of the Means of Independent Poisson Laws.” Journal of the American Statistical Association 70 (351a): 698–705. https://doi.org/10.1080/01621459.1975.10482497.

Collings, Bruce J., and Barry H. Margolin. 1985. “Testing Goodness of Fit for the Poisson Assumption When Observations Are Not Identically Distributed.” Journal of the American Statistical Association 80 (390): 411–18. https://doi.org/10.2307/2287906.

Cox, D. R., and H. S. Battey. 2017. “Large Numbers of Explanatory Variables, a Semi-Descriptive Analysis.” Proceedings of the National Academy of Sciences 114 (32): 8592–5. https://doi.org/10.1073/pnas.1703764114.

Cule, Erika, Paolo Vineis, and Maria De Iorio. 2011. “Significance Testing in Ridge Regression for Genetic Data.” BMC Bioinformatics 12 (September): 372. https://doi.org/10.1186/1471-2105-12-372.

Dai, Ran, and Rina Foygel Barber. 2016. “The Knockoff Filter for FDR Control in Group-Sparse and Multitask Regression.” arXiv Preprint arXiv:1602.03589. https://arxiv.org/abs/1602.03589.

DasGupta, Anirban. 2008. Asymptotic Theory of Statistics and Probability. Springer Texts in Statistics. New York: Springer New York. http://link.springer.com/10.1007/978-0-387-75971-5.

Delaigle, Aurore, Peter Hall, and Alexander Meister. 2008. “On Deconvolution with Repeated Measurements.” The Annals of Statistics 36 (2): 665–85. https://doi.org/10.1214/009053607000000884.

Dezeure, Ruben, Peter Bühlmann, Lukas Meier, and Nicolai Meinshausen. 2014. “High-Dimensional Inference: Confidence Intervals, P-Values and R-Software Hdi,” August. http://arxiv.org/abs/1408.4026.

Donoho, David L., and Iain M. Johnstone. 1995. “Adapting to Unknown Smoothness via Wavelet Shrinkage.” Journal of the American Statistical Association 90 (432): 1200–1224. https://doi.org/10.1080/01621459.1995.10476626.

Donoho, David L., Iain M. Johnstone, Gerard Kerkyacharian, and Dominique Picard. 1995. “Wavelet Shrinkage: Asymptopia?” Journal of the Royal Statistical Society. Series B (Methodological) 57 (2): 301–69. http://statweb.stanford.edu/~imj/WEBLIST/1995/asymp.pdf.

Dwork, Cynthia, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth. 2015a. “The Reusable Holdout: Preserving Validity in Adaptive Data Analysis.” Science 349 (6248): 636–38. https://doi.org/10.1126/science.aaa9375.

Dwork, Cynthia, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Leon Roth. 2015b. “Preserving Statistical Validity in Adaptive Data Analysis.” In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing - STOC ’15, 117–26. Portland, Oregon, USA: ACM Press. https://doi.org/10.1145/2746539.2746580.

Efird, Jimmy Thomas, and Susan Searles Nielsen. 2008. “A Method to Compute Multiplicity Corrected Confidence Intervals for Odds Ratios and Other Relative Effect Estimates.” International Journal of Environmental Research and Public Health 5 (5): 394–98. http://www.mdpi.com/1660-4601/5/5/394/htm.

Efron, B. 1979. “Bootstrap Methods: Another Look at the Jackknife.” The Annals of Statistics 7 (1): 1–26. https://doi.org/10.1214/aos/1176344552.

Efron, Bradley. 1986. “How Biased Is the Apparent Error Rate of a Prediction Rule?” Journal of the American Statistical Association 81 (394): 461–70. https://doi.org/10.1080/01621459.1986.10478291.

———. 2004a. “Selection and Estimation for Large-Scale Simultaneous Inference.” http://statweb.stanford.edu/~ckirby/brad/papers/2004Selection.pdf.

———. 2004b. “The Estimation of Prediction Error.” Journal of the American Statistical Association 99 (467): 619–32. https://doi.org/10.1198/016214504000000692.

———. 2007. “Doing Thousands of Hypothesis Tests at the Same Time.” Metron - International Journal of Statistics LXV (1): 3–21. http://statweb.stanford.edu/~ckirby/brad/papers/2007DoingThousands.pdf.

———. 2008. “Simultaneous Inference: When Should Hypothesis Testing Problems Be Combined?” The Annals of Applied Statistics 2 (1): 197–223. https://doi.org/10.1214/07-AOAS141.

———. 2009. “Empirical Bayes Estimates for Large-Scale Prediction Problems.” Journal of the American Statistical Association 104 (487): 1015–28. https://doi.org/10.1198/jasa.2009.tm08523.

———. 2010a. “The Future of Indirect Evidence.” Statistical Science 25 (2): 145–57. https://doi.org/10.1214/09-STS308.

———. 2010b. “Correlated Z-Values and the Accuracy of Large-Scale Statistical Estimates.” Journal of the American Statistical Association 105 (491): 1042–55. https://doi.org/10.1198/jasa.2010.tm09129.

———. 2013. Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction. Reprint edition. Cambridge: Cambridge University Press.

Evans, Robin J., and Vanessa Didelez. n.d. “Recovering from Selection Bias Using Marginal Structure in Discrete Models.” Accessed July 18, 2015. http://www.homepages.ucl.ac.uk/~ucgtrbd/uai2015_causal/papers/evans.pdf.

Ewald, Karl, and Ulrike Schneider. 2015. “Confidence Sets Based on the Lasso Estimator,” July. http://arxiv.org/abs/1507.05315.

Fan, Jianqing, and Runze Li. 2001. “Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties.” Journal of the American Statistical Association 96 (456): 1348–60. https://doi.org/10.1198/016214501753382273.

Fan, Jianqing, and Jinchi Lv. 2010. “A Selective Overview of Variable Selection in High Dimensional Feature Space.” Statistica Sinica 20 (1): 101–48. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3092303/.

Franz, Volker H., and Ulrike von Luxburg. 2014. “Unconscious Lie Detection as an Example of a Widespread Fallacy in the Neurosciences,” July. http://arxiv.org/abs/1407.4240.

Friedman, Jerome, Trevor Hastie, and Rob Tibshirani. 2010. “Regularization Paths for Generalized Linear Models via Coordinate Descent.” Journal of Statistical Software 33 (1): 1–22. https://doi.org/10.18637/jss.v033.i01.

Garreau, Damien, Rémi Lajugie, Sylvain Arlot, and Francis Bach. 2014. “Metric Learning for Temporal Sequence Alignment.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 1817–25. Curran Associates, Inc. http://papers.nips.cc/paper/5383-metric-learning-for-temporal-sequence-alignment.pdf.

Geer, Sara van de, Peter Bühlmann, Ya’acov Ritov, and Ruben Dezeure. 2014. “On Asymptotically Optimal Confidence Regions and Tests for High-Dimensional Models.” The Annals of Statistics 42 (3): 1166–1202. https://doi.org/10.1214/14-AOS1221.

Geer, Sara van de, and Johannes Lederer. 2011. “The Lasso, Correlated Design, and Improved Oracle Inequalities,” July. http://arxiv.org/abs/1107.0189.

Gelman, Andrew, and Eric Loken. 2014. “The Statistical Crisis in Science.” American Scientist 102 (6): 460. https://doi.org/10.1511/2014.111.460.

Genovese, Christopher, and Larry Wasserman. 2008. “Adaptive Confidence Bands.” The Annals of Statistics 36 (2): 875–905. https://doi.org/10.1214/07-AOS500.

Gonçalves, Sílvia, and Halbert White. 2004. “Maximum Likelihood and the Bootstrap for Nonlinear Dynamic Models.” Journal of Econometrics 119 (1): 199–219. https://doi.org/10.1016/S0304-4076(03)00204-5.

Hardt, Moritz, and Jonathan Ullman. 2014. “Preventing False Discovery in Interactive Data Analysis Is Hard.” In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, 454–63. FOCS ’14. Washington, DC, USA: IEEE Computer Society. https://doi.org/10.1109/FOCS.2014.55.

Hesterberg, Tim, Nam Hee Choi, Lukas Meier, and Chris Fraley. 2008. “Least Angle and ℓ1 Penalized Regression: A Review.” Statistics Surveys 2: 61–93. https://doi.org/10.1214/08-SS035.

Hjort, Nils Lid. 1992. “On Inference in Parametric Survival Data Models.” International Statistical Review / Revue Internationale de Statistique 60 (3): 355–87. https://doi.org/10.2307/1403683.

Hjort, Nils Lid, Mike West, and Sue Leurgans. 1992. “Semiparametric Estimation of Parametric Hazard Rates.” In Survival Analysis: State of the Art, edited by John P. Klein and Prem K. Goel, 211–36. Nato Science 211. Springer Netherlands. http://link.springer.com/chapter/10.1007/978-94-015-7983-4_13.

Hjort, N. L., and M. C. Jones. 1996. “Locally Parametric Nonparametric Density Estimation.” The Annals of Statistics 24 (4): 1619–47. https://doi.org/10.1214/aos/1032298288.

Hurvich, Clifford M., and Chih-Ling Tsai. 1989. “Regression and Time Series Model Selection in Small Samples.” Biometrika 76 (2): 297–307. https://doi.org/10.1093/biomet/76.2.297.

Ichimura, Hidehiko. 1993. “Semiparametric Least Squares (SLS) and Weighted SLS Estimation of Single-Index Models.” Journal of Econometrics 58 (1–2): 71–120. https://doi.org/10.1016/0304-4076(93)90114-K.

Ioannidis, John P. 2005. “Why Most Published Research Findings Are False.” PLoS Medicine 2 (8): e124. https://doi.org/10.1371/journal.pmed.0020124.

Iyengar, Satish, and Joel B. Greenhouse. 1988. “Selection Models and the File Drawer Problem.” Statistical Science 3 (1): 109–17. https://doi.org/10.1214/ss/1177013012.

Janková, Jana, and Sara van de Geer. 2015. “Honest Confidence Regions and Optimality in High-Dimensional Precision Matrix Estimation,” July. http://arxiv.org/abs/1507.02061.

Janson, Lucas, William Fithian, and Trevor J. Hastie. 2015. “Effective Degrees of Freedom: A Flawed Metaphor.” Biometrika 102 (2): 479–85. https://doi.org/10.1093/biomet/asv019.

Kaufman, S., and S. Rosset. 2014. “When Does More Regularization Imply Fewer Degrees of Freedom? Sufficient Conditions and Counterexamples.” Biometrika 101 (4): 771–84. https://doi.org/10.1093/biomet/asu034.

Konishi, Sadanori, and Genshiro Kitagawa. 1996. “Generalised Information Criteria in Model Selection.” Biometrika 83 (4): 875–90. https://doi.org/10.1093/biomet/83.4.875.

Korattikara, Anoop, Yutian Chen, and Max Welling. 2015. “Sequential Tests for Large-Scale Learning.” Neural Computation 28 (1): 45–70. https://doi.org/10.1162/NECO_a_00796.

Künsch, Hans Rudolf. 1986. “Discrimination Between Monotonic Trends and Long-Range Dependence.” Journal of Applied Probability 23 (4): 1025–30.

Lancichinetti, Andrea, M. Irmak Sirer, Jane X. Wang, Daniel Acuna, Konrad Körding, and Luís A. Nunes Amaral. 2015. “High-Reproducibility and High-Accuracy Method for Automated Topic Classification.” Physical Review X 5 (1): 011007. https://doi.org/10.1103/PhysRevX.5.011007.

Lavergne, Pascal, Samuel Maistre, and Valentin Patilea. 2015. “A Significance Test for Covariates in Nonparametric Regression.” Electronic Journal of Statistics 9: 643–78. https://doi.org/10.1214/15-EJS1005.

Lazzeroni, L C, and A Ray. 2012. “The Cost of Large Numbers of Hypothesis Tests on Power, Effect Size and Sample Size.” Molecular Psychiatry 17 (1): 108–14. https://doi.org/10.1038/mp.2010.117.

Lee, Jason D., Dennis L. Sun, Yuekai Sun, and Jonathan E. Taylor. 2013. “Exact Post-Selection Inference, with Application to the Lasso,” November. http://arxiv.org/abs/1311.6238.

Li, Runze, and Hua Liang. 2008. “Variable Selection in Semiparametric Regression Modeling.” The Annals of Statistics 36 (1): 261–86. https://doi.org/10.1214/009053607000000604.

Lockhart, Richard, Jonathan Taylor, Ryan J. Tibshirani, and Robert Tibshirani. 2014. “A Significance Test for the Lasso.” The Annals of Statistics 42 (2): 413–68. https://doi.org/10.1214/13-AOS1175.

Meinshausen, Nicolai. 2006. “False Discovery Control for Multiple Tests of Association Under General Dependence.” Scandinavian Journal of Statistics 33 (2): 227–37. https://doi.org/10.1111/j.1467-9469.2005.00488.x.

———. 2007. “Relaxed Lasso.” Computational Statistics & Data Analysis 52 (1): 374–93. https://doi.org/10.1016/j.csda.2006.12.019.

———. 2014. “Group Bound: Confidence Intervals for Groups of Variables in Sparse High Dimensional Regression Without Assumptions on the Design.” Journal of the Royal Statistical Society: Series B (Statistical Methodology). https://doi.org/10.1111/rssb.12094.

Meinshausen, Nicolai, and Peter Bühlmann. 2005. “Lower Bounds for the Number of False Null Hypotheses for Multiple Testing of Associations Under General Dependence Structures.” Biometrika 92 (4): 893–907. https://doi.org/10.1093/biomet/92.4.893.

———. 2006. “High-Dimensional Graphs and Variable Selection with the Lasso.” The Annals of Statistics 34 (3): 1436–62. https://doi.org/10.1214/009053606000000281.

———. 2010. “Stability Selection.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72 (4): 417–73. https://doi.org/10.1111/j.1467-9868.2010.00740.x.

Meinshausen, Nicolai, Lukas Meier, and Peter Bühlmann. 2009. “P-Values for High-Dimensional Regression.” Journal of the American Statistical Association 104 (488): 1671–81. https://doi.org/10.1198/jasa.2009.tm08647.

Meinshausen, Nicolai, and John Rice. 2006. “Estimating the Proportion of False Null Hypotheses Among a Large Number of Independently Tested Hypotheses.” The Annals of Statistics 34 (1): 373–93. https://doi.org/10.1214/009053605000000741.

Meinshausen, Nicolai, and Bin Yu. 2009. “Lasso-Type Recovery of Sparse Representations for High-Dimensional Data.” The Annals of Statistics 37 (1): 246–70. https://doi.org/10.1214/07-AOS582.

Müller, Andreas C., and Sven Behnke. 2014. “PyStruct: Learning Structured Prediction in Python.” Journal of Machine Learning Research 15: 2055–60. http://jmlr.org/papers/v15/mueller14a.html.

Nickl, Richard, and Sara van de Geer. 2013. “Confidence Sets in Sparse Regression.” The Annals of Statistics 41 (6): 2852–76. https://doi.org/10.1214/13-AOS1170.

Noble, William Stafford. 2009. “How Does Multiple Testing Correction Work?” Nature Biotechnology 27 (12): 1135–7. https://doi.org/10.1038/nbt1209-1135.

Ramsey, Joseph, Madelyn Glymour, Ruben Sanchez-Romero, and Clark Glymour. 2017. “A Million Variables and More: The Fast Greedy Equivalence Search Algorithm for Learning High-Dimensional Graphical Causal Models, with an Application to Functional Magnetic Resonance Images.” International Journal of Data Science and Analytics 3 (2): 121–29. https://doi.org/10.1007/s41060-016-0032-z.

Rosset, Saharon, and Ji Zhu. 2007. “Piecewise Linear Regularized Solution Paths.” The Annals of Statistics 35 (3): 1012–30. https://doi.org/10.1214/009053606000001370.

Rothman, K. J. 1990. “No Adjustments Are Needed for Multiple Comparisons.” Epidemiology (Cambridge, Mass.) 1 (1): 43–46. http://journals.lww.com/epidem/Fulltext/1990/01000/No_Adjustments_Are_Needed_for_Multiple.10.aspx.

Rzhetsky, Andrey, Jacob G. Foster, Ian T. Foster, and James A. Evans. 2015. “Choosing Experiments to Accelerate Collective Discovery.” Proceedings of the National Academy of Sciences 112 (47): 14569–74. https://doi.org/10.1073/pnas.1509757112.

Siegmund, David O., and Jian Li. 2014. “Higher Criticism: P-Values and Criticism,” November. http://arxiv.org/abs/1411.1437.

Stone, M. 1977. “An Asymptotic Equivalence of Choice of Model by Cross-Validation and Akaike’s Criterion.” Journal of the Royal Statistical Society. Series B (Methodological) 39 (1): 44–47. http://www.stat.washington.edu/courses/stat527/s14/readings/Stone1977.pdf.

Storey, John D. 2002. “A Direct Approach to False Discovery Rates.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 64 (3): 479–98. https://doi.org/10.1111/1467-9868.00346.

Su, Weijie, Malgorzata Bogdan, and Emmanuel J. Candès. 2015. “False Discoveries Occur Early on the Lasso Path,” November. http://arxiv.org/abs/1511.01957.

Taddy, Matt. 2013. “One-Step Estimator Paths for Concave Regularization,” August. http://arxiv.org/abs/1308.5623.

Tansey, Wesley, Oluwasanmi Koyejo, Russell A. Poldrack, and James G. Scott. 2014. “False Discovery Rate Smoothing,” November. http://arxiv.org/abs/1411.6144.

Tansey, Wesley, Oscar Hernan Madrid Padilla, Arun Sai Suggala, and Pradeep Ravikumar. 2015. “Vector-Space Markov Random Fields via Exponential Families.” In Journal of Machine Learning Research, 684–92. http://jmlr.csail.mit.edu/proceedings/papers/v37/tansey15.html.

Taylor, Jonathan, Richard Lockhart, Ryan J. Tibshirani, and Robert Tibshirani. 2014. “Exact Post-Selection Inference for Forward Stepwise and Least Angle Regression,” January. http://arxiv.org/abs/1401.3889.

Tibshirani, Ryan J. 2014. “A General Framework for Fast Stagewise Algorithms,” August. http://arxiv.org/abs/1408.5801.

Tibshirani, Ryan J., Alessandro Rinaldo, Robert Tibshirani, and Larry Wasserman. 2015. “Uniform Asymptotic Inference and the Bootstrap After Model Selection,” June. http://arxiv.org/abs/1506.06266.

Wasserman, Larry, and Kathryn Roeder. 2009. “High-Dimensional Variable Selection.” Annals of Statistics 37 (5A): 2178–2201. https://doi.org/10.1214/08-AOS646.

Zhang, Cun-Hui, and Stephanie S. Zhang. 2014. “Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (1): 217–42. https://doi.org/10.1111/rssb.12026.

Zou, Hui, Trevor Hastie, and Robert Tibshirani. 2007. “On the ‘Degrees of Freedom’ of the Lasso.” The Annals of Statistics 35 (5): 2173–92. https://doi.org/10.1214/009053607000000127.