Tests, statistical

Maybe also design of experiments while we are here?



The mathematics of the last century's worth of experiment design. This is about the classical framing, where you design and run an experiment, decide whether a hypothesis can reasonably be construed to be true or not, then go home. There are many elaborations of this approach in the modern world. For example, we examine large numbers of hypotheses at once under multiple testing. It can be considered part of the model selection question, or maybe even made particularly nifty using sparse model selection. Probably the most interesting family of tests are tests of conditional independence, especially multiple versions of those.
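
In practice the multiple-testing elaboration often amounts to a one-line correction over a vector of p-values. A minimal sketch (the simulated p-values are purely for illustration) using statsmodels' Benjamini–Hochberg FDR procedure:

```python
# Minimal sketch: correcting many simultaneous tests for false discovery rate.
# The simulated p-values are purely illustrative.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# 95 "null" p-values (uniform) plus 5 genuinely small ones
pvals = np.concatenate([rng.uniform(size=95), rng.uniform(0, 0.001, size=5)])

reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"raw rejections at 0.05: {(pvals < 0.05).sum()}")
print(f"FDR-adjusted (BH):      {reject.sum()}")
```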

But the classic simplest case has things to teach us. Probably the least sexy thing in statistics and as such, usually taught by the least interesting professor in the department, or at least one who couldn’t find an interesting enough excuse to get out of it, which is a strong correlate. Said professor will then teach it to you as if you were in turn the least interesting student in the school, and so the spiral of boredom winds on. Anyhow, it turns out there are powerful tools within this area, and also instructive examples.

tl;dr Classic statistical tests are linear models where your goal is to decide whether a coefficient should be regarded as non-zero or not. Jonas Kristoffer Lindeløv explains this perspective: Common statistical tests are linear models. FWIW I found that perspective to be a real 💡 moment.
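
To make that concrete, here is a minimal sketch (simulated data, not taken from Lindeløv's post) showing that an independent-samples t-test and the group coefficient in `y ~ 1 + group` give the same t statistic and p-value:

```python
# Sketch: a two-sample t-test is equivalent to testing the group coefficient
# in a linear model y ~ 1 + group. Simulated data, for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": np.repeat([0, 1], 50),
    "y": np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(0.5, 1.0, 50)]),
})

t, p = stats.ttest_ind(df.y[df.group == 1], df.y[df.group == 0], equal_var=True)
fit = smf.ols("y ~ group", data=df).fit()

print(f"t-test:       t = {t:.4f}, p = {p:.4f}")
print(f"linear model: t = {fit.tvalues['group']:.4f}, p = {fit.pvalues['group']:.4f}")
```

Both lines should print the same numbers, which is the whole point.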

Daniel Lakens asks Do You Really Want to Test a Hypothesis?:

The lecture “Do You Really Want to Test a Hypothesis?” aims to explain which question a hypothesis test asks, and discusses when a hypothesis test answers a question you are interested in. It is very easy to say what not to do, or to point out what is wrong with statistical tools. Statistical tools are very limited, even under ideal circumstances. It’s more difficult to say what you can do. If you follow my work, you know that this latter question is what I spend my time on. Instead of telling you optional stopping can’t be done because it is p-hacking, I explain how you can do it correctly through sequential analysis. Instead of telling you it is wrong to conclude the absence of an effect from p > 0.05, I explain how to use equivalence testing. Instead of telling you p-values are the devil, I explain how they answer a question you might be interested in when used well. Instead of saying preregistration is redundant, I explain from which philosophy of science preregistration has value. And instead of saying we should abandon hypothesis tests, I try to explain in this video how to use them wisely. This is all part of my ongoing #JustifyEverything educational tour. I think it is a reasonable expectation that researchers should be able to answer at least a simple ‘why’ question if you ask why they use a specific tool, or use a tool in a specific manner.

Is that all too measured? Want more invective? See Everything Wrong with P-Values Under One Roof (Briggs 2019).

Lucile Lu, Robert Chang and Dmitriy Ryaboy of Twitter have a practical guide to risky testing at scale: Power, minimal detectable effect, and bucket size estimation in A/B tests.
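
For a flavour of the kind of calculation that guide walks through, a hedged sketch of sample size per bucket using statsmodels; the baseline conversion rate and minimal detectable effect below are invented for illustration:

```python
# Sketch: sample size per bucket for an A/B test on a conversion rate.
# Baseline rate and minimal detectable effect (MDE) are invented.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10   # hypothetical control conversion rate
mde = 0.01        # hypothetical minimal detectable absolute lift

# Convert the proportion difference into Cohen's h, then solve for n per arm.
effect = proportion_effectsize(baseline + mde, baseline)
n_per_bucket = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_bucket:.0f} observations per bucket")
```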

Bob Sturm recommends Bailey (2008) for a discussion of hypothesis testing in terms of linear subspaces.

(Side note: the proportional odds model generalises Kruskal–Wallis / Wilcoxon–Mann–Whitney. Huh.)
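
If you want to poke at that claim empirically, here is a rough sketch comparing a Wilcoxon–Mann–Whitney test with a proportional-odds (ordered logit) fit whose only predictor is the group indicator. The data are simulated and the `OrderedModel` usage is my own, not from any of the sources above:

```python
# Sketch: Wilcoxon-Mann-Whitney alongside a proportional-odds (ordered logit)
# model with a single binary predictor. Simulated ordinal data, illustration only.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
group = np.repeat([0, 1], 100)
# ordinal outcome on a 1..5 scale, shifted upward a little for group 1
latent = rng.normal(0, 1, 200) + 0.4 * group
y = np.digitize(latent, [-1.0, -0.3, 0.3, 1.0]) + 1

_, p_wmw = stats.mannwhitneyu(y[group == 1], y[group == 0], alternative="two-sided")

endog = pd.Series(pd.Categorical(y, ordered=True))
fit = OrderedModel(endog, pd.DataFrame({"group": group}), distr="logit").fit(
    method="bfgs", disp=False
)
print(f"WMW:               p = {p_wmw:.4f}")
print(f"proportional odds: p = {fit.pvalues['group']:.4f}")
```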

Everything so far has been in a frequentist framing. The whole question of hypothesis testing is more likely to be vacuous in a Bayesian setting (although Bayes model selection is a thing). See also Thomas Lumley on a Bayesian t-test, which ends up being a kind of bootstrap in an interesting way. For something actionable, see Yanir Seroussi on Making Bayesian A/B testing more accessible.
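
In the spirit of Seroussi's post, though not his code, a minimal Beta-Binomial sketch of a Bayesian A/B comparison with invented counts:

```python
# Sketch: Beta-Binomial Bayesian A/B comparison via posterior sampling.
# Counts are invented; priors are flat Beta(1, 1).
import numpy as np

rng = np.random.default_rng(0)
# hypothetical (conversions, trials) per arm
conv_a, n_a = 120, 1000
conv_b, n_b = 145, 1000

# Conjugacy: posterior for each rate is Beta(1 + conversions, 1 + failures).
post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

print(f"P(B > A) ≈ {(post_b > post_a).mean():.3f}")
print(f"posterior mean lift ≈ {(post_b - post_a).mean():.4f}")
```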

I cannot decide if tea-lang is a passive-aggressive joke or not. It is a compiler for statistical tests.

Tea is a domain specific programming language that automates statistical test selection and execution… Users provide 5 pieces of information:

  • the dataset of interest,
  • the variables in the dataset they want to analyze,
  • the study design (e.g., independent, dependent variables),
  • the assumptions they make about the data based on domain knowledge (e.g., a variable is normally distributed), and
  • a hypothesis.

Tea then “compiles” these into logical constraints to select valid statistical tests. Tests are considered valid if and only if all the assumptions they make about the data (e.g., normal distribution, equal variance between groups, etc.) hold. Tea then finally executes the valid tests.
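
This is not Tea's actual API. The following is just a hand-rolled sketch of the underlying idea: run assumption checks, then execute only a test whose assumptions hold.

```python
# Not Tea's API: a hand-rolled sketch of assumption-driven test selection.
# Chooses between Student's t, Welch's t, and Mann-Whitney U for two groups,
# based on normality (Shapiro-Wilk) and equal-variance (Levene) checks.
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    _, p_norm_a = stats.shapiro(a)
    _, p_norm_b = stats.shapiro(b)
    _, p_eq_var = stats.levene(a, b)
    if min(p_norm_a, p_norm_b) < alpha:
        name, (_, p) = "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")
    elif p_eq_var < alpha:
        name, (_, p) = "Welch's t", stats.ttest_ind(a, b, equal_var=False)
    else:
        name, (_, p) = "Student's t", stats.ttest_ind(a, b, equal_var=True)
    return name, p

rng = np.random.default_rng(7)
a = rng.normal(0.0, 1.0, 40)      # simulated data, illustration only
b = rng.lognormal(0.0, 1.0, 40)   # skewed, so the normality check should fail
print(compare_two_groups(a, b))
```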

Goodness-of-fit tests

Also a useful thing to have; the hypothesis here is kind-of more interesting, along the lines of it-is-unlikely-that-the-model-you-propose-generated-this-data.
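
A minimal example (simulated data, scipy's Kolmogorov–Smirnov test): do these heavier-tailed draws plausibly come from the standard normal model we propose?

```python
# Sketch: Kolmogorov-Smirnov goodness-of-fit test against a proposed model.
# Simulated data, illustration only: Student-t draws tested against N(0, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.standard_t(df=3, size=500)   # heavier-tailed than the proposed model

res = stats.kstest(x, "norm")        # H0: x came from a standard normal
print(f"KS statistic = {res.statistic:.3f}, p = {res.pvalue:.4f}")
```

The usual caveat applies: if you estimate the model's parameters from the same data, the stock KS p-value is no longer valid (Lilliefors-type corrections exist for that case).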

Design of experiments

TBD

References

Bailey, R. A. 2008. Design of Comparative Experiments. 1 edition. Cambridge series on statistical and probabilistic mathematics. Cambridge; New York: Cambridge University Press.
Briggs, William M. 2019. “Everything Wrong with P-Values Under One Roof.” In Beyond Traditional Probabilistic Methods in Economics, edited by Vladik Kreinovich, Nguyen Ngoc Thach, Nguyen Duc Trung, and Dang Van Thanh. Vol. 809. Studies in Computational Intelligence. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-04200-4.
Colbourn, Charles J., and Jeffrey H. Dinitz. 2010. Handbook of Combinatorial Designs, Second Edition. CRC Press.
Efron, Bradley. 2008. “Simultaneous Inference: When Should Hypothesis Testing Problems Be Combined?” The Annals of Applied Statistics 2 (1): 197–223. https://doi.org/10.1214/07-AOAS141.
Geer, Sara van de. 2016. Estimation and Testing Under Sparsity. Vol. 2159. Lecture Notes in Mathematics. Cham: Springer International Publishing. http://link.springer.com/10.1007/978-3-319-32774-7.
Good, Phillip I. 1999. Resampling Methods: A Practical Guide to Data Analysis. Birkhäuser Basel. https://doi.org/10.1007/978-1-4757-3049-4.
Greenland, Sander. 1995a. “Dose-Response and Trend Analysis in Epidemiology: Alternatives to Categorical Analysis.” Epidemiology 6 (4): 356–65. https://www.jstor.org/stable/3702080.
———. 1995b. “Problems in the Average-Risk Interpretation of Categorical Dose-Response Analyses.” Epidemiology 6 (5): 563–65. https://www.jstor.org/stable/3702134.
Kohavi, Ron, Roger Longbotham, Dan Sommerfield, and Randal M. Henne. 2009. “Controlled Experiments on the Web: Survey and Practical Guide.” Data Mining and Knowledge Discovery 18 (1): 140–81. https://doi.org/10.1007/s10618-008-0114-1.
Korattikara, Anoop, Yutian Chen, and Max Welling. 2015. “Sequential Tests for Large-Scale Learning.” Neural Computation 28 (1): 45–70. https://doi.org/10.1162/NECO_a_00796.
Kreinovich, Vladik, Nguyen Ngoc Thach, Nguyen Duc Trung, and Dang Van Thanh, eds. 2019. Beyond Traditional Probabilistic Methods in Economics. Vol. 809. Studies in Computational Intelligence. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-04200-4.
Lavergne, Pascal, Samuel Maistre, and Valentin Patilea. 2015. “A significance test for covariates in nonparametric regression.” Electronic Journal of Statistics 9: 643–78. https://doi.org/10.1214/15-EJS1005.
Lehmann, Erich L., and Joseph P. Romano. 2010. Testing statistical hypotheses. 3. ed. Springer texts in statistics. New York, NY: Springer.
Lumley, Thomas, Paula Diehr, Scott Emerson, and Lu Chen. 2002. “The Importance of the Normality Assumption in Large Public Health Data Sets.” Annual Review of Public Health 23 (1): 151–69. https://doi.org/10.1146/annurev.publhealth.23.100901.140546.
Maesono, Yoshihiko, Taku Moriyama, and Mengxin Lu. 2016. “Smoothed Nonparametric Tests and Their Properties.” arXiv:1610.02145 [math, Stat], October. http://arxiv.org/abs/1610.02145.
Malevergne, Yannick, and Didier Sornette. 2003. “Testing the Gaussian Copula Hypothesis for Financial Assets Dependences.” Quantitative Finance 3 (4): 231–50. https://doi.org/10.1088/1469-7688/3/4/301.
McShane, Blakeley B., David Gal, Andrew Gelman, Christian Robert, and Jennifer L. Tackett. 2019. “Abandon Statistical Significance.” The American Statistician 73 (sup1): 235–45. https://doi.org/10.1080/00031305.2018.1527253.
Na, Seongryong. 2009. “Goodness-of-Fit Test Using Residuals in Infinite-Order Autoregressive Models.” Journal of the Korean Statistical Society 38 (3): 287–95. https://doi.org/10.1016/j.jkss.2008.12.002.
Ormerod, John T., Michael Stewart, Weichang Yu, and Sarah E. Romanes. 2017. “Bayesian Hypothesis Tests with Diffuse Priors: Can We Have Our Cake and Eat It Too?” arXiv:1710.09146 [math, Stat], October. http://arxiv.org/abs/1710.09146.
Paparoditis, Efstathios, and Theofanis Sapatinas. 2014. “Bootstrap-Based Testing for Functional Data.” arXiv:1409.4317 [math, Stat], September. http://arxiv.org/abs/1409.4317.
Sejdinovic, Dino, Bharath Sriperumbudur, Arthur Gretton, and Kenji Fukumizu. 2012. “Equivalence of distance-based and RKHS-based statistics in hypothesis testing.” The Annals of Statistics 41 (5): 2263–91. https://doi.org/10.1214/13-AOS1140.
Tang, Minh, Avanti Athreya, Daniel L. Sussman, Vince Lyzinski, and Carey E. Priebe. 2014. “A Nonparametric Two-Sample Hypothesis Testing Problem for Random Dot Product Graphs.” arXiv:1409.2344 [math, Stat], September. http://arxiv.org/abs/1409.2344.
