Statistics and machine learning



Those who ignore statistics are condemned to reinvent it

Bradley Efron, according to Kareem Carr

A methodological distinction that some people make: What’s the difference between analytics and statistics?

  • Analytics helps you form hypotheses. It improves the quality of your questions.
  • Statistics helps you test hypotheses. It improves the quality of your answers.

I would divide these into exploratory and confirmatory statistics, but that terminology is not universal.
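The distinction can be sketched with a toy two-group comparison (the numbers here are made up for illustration): summary statistics suggest a hypothesis, and a permutation test then checks whether the apparent pattern survives scrutiny.

```python
import random

random.seed(0)

# Toy data: outcomes under two treatments (made up for illustration).
a = [2.1, 2.5, 2.3, 2.7, 2.4, 2.6]
b = [1.8, 2.0, 1.9, 2.2, 1.7, 2.1]

def mean(xs):
    return sum(xs) / len(xs)

# Exploratory step: the group means suggest a hypothesis ("A beats B").
observed = mean(a) - mean(b)

# Confirmatory step: a permutation test asks how often a difference this
# large would arise if the group labels were meaningless.
pooled = a + b
count = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    if mean(pooled[:len(a)]) - mean(pooled[len(a):]) >= observed:
        count += 1
p_value = count / n_perm

print(f"observed difference {observed:.2f}, p = {p_value:.3f}")
```

The exploratory half improves the question ("does treatment A do better?"); the confirmatory half improves the answer (how surprising the observed difference would be under a null of exchangeable labels).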

It is best not to need statistics at all. If a pattern is so clear it is undeniable, then we can go home early. Although — is our bar for undeniable high enough? Have we really eliminated our own biases and wishful thinking? How would we know?

OK, statistics also plays some other roles, like giving us greater accuracy in our predictions, but that doesn’t fit into the aphorism so nicely.

What else to say here? I am not sure. I created this page before I became a professional statistician, and statistics grew to be half this web site. For more information on statistics, see… pretty much any page.

Role in science

Statistics is an Excellent Servant and a Bad Master:

This means that Galileo, Newton, Kepler, Hooke, Pasteur, Mendel, Lavoisier, Maxwell, von Helmholtz, Mendeleev, etc. did their work without anything that resembled modern statistics, and that Einstein, Curie, Fermi, Bohr, Heisenberg, etc. etc. did their work in an age when statistics was still extremely rudimentary. We don’t need statistics to do good research.

Indeed we do not. What we need statistics for is to ensure that marginally viable research is not 💩 research.

Exploratory data analysis

See exploratory data analysis.

Unifying statistics and ML

I’m especially interested in modern fusion methods that harmonise what we call statistics with what we call machine learning, and in clearing up the unnecessary terminological confusion between those traditions. But I have nothing to say about that right now.

Decisions

TODO: Introduce decision theory.

Tests

TODO: Introduce tests.

Taxonomies

Boaz Barak, in ML Theory with bad drawings, attempts one division of labour:

However, what we actually do is at least thrice-removed from this ideal:

  1. The model gap: We do not optimize over all possible systems, but rather a small subset of such systems (e.g., ones that belong to a certain family of models).
  2. The metric gap: In almost all cases, we do not optimize the actual measure of success we care about, but rather another metric that is at best correlated with it.
  3. The algorithm gap: We don’t even optimize the latter metric since it will almost always be non-convex, and hence the system we end up with depends on our starting point and the particular algorithms we use.

The magic of machine learning is that sometimes (though not always!) we can still get good results despite these gaps. Much of the theory of machine learning is about understanding under what conditions can we bridge some of these gaps.

The above discussion explains the “machine learning is just X” takes. The expressivity of our models falls under approximation theory. The gap between the success we want to achieve and the metric we can measure often corresponds to the difference between population and sample performance, which becomes a question of statistics. The study of our algorithms’ performance falls under optimization.
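The algorithm gap in particular is easy to demonstrate. A minimal sketch (a made-up toy loss, not Barak’s own example): gradient descent on a non-convex objective ends up in different minima depending on where it starts.

```python
# Illustrative toy: a non-convex "loss" with two local minima,
# near w ≈ -1.04 and w ≈ +0.96.

def loss(w):
    return (w * w - 1.0) ** 2 + 0.3 * w

def grad(w):
    # Derivative of the loss above.
    return 4.0 * w * (w * w - 1.0) + 0.3

def descend(w, lr=0.01, steps=2000):
    # Plain gradient descent from initial point w.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_left = descend(-2.0)   # converges near the minimum at w ≈ -1.04
w_right = descend(+2.0)  # converges near the minimum at w ≈ +0.96
print(f"{w_left:.2f} {w_right:.2f}")
```

Note that the global minimum is the left one (the +0.3 w term tilts the landscape), but gradient descent started on the right never finds it: the system we end up with depends on our starting point, exactly as the quote says.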

Textbooks, resources

References

Aggarwal, Charu C. 2015. Data Mining. 1st edition. Cham: Springer International Publishing.
Cox, D. R., and D. V. Hinkley. 2000. Theoretical Statistics. Boca Raton: Chapman & Hall/CRC.
Dadkhah, Kamran. 2011. Foundations of Mathematical and Computational Economics.
Devroye, Luc, László Györfi, and Gábor Lugosi. 1996. A Probabilistic Theory of Pattern Recognition. New York: Springer.
Efron, Bradley, and Trevor Hastie. 2016. Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. Institute of Mathematical Statistics Monographs. New York, NY: Cambridge University Press.
Freedman, David A., and Philip B Stark. 2009. What Is the Chance of an Earthquake? In Statistical Models and Causal Inference: A Dialogue with the Social Sciences, edited by David Collier, Jasjeet S. Sekhon, and Philip B. Stark. Cambridge: Cambridge University Press.
Gelman, Andrew, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. 2013. Bayesian Data Analysis. 3rd edition. Chapman & Hall/CRC Texts in Statistical Science. Boca Raton: Chapman and Hall/CRC.
Greenland, Sander. 1995a. “Dose-Response and Trend Analysis in Epidemiology: Alternatives to Categorical Analysis.” Epidemiology 6 (4): 356–65.
———. 1995b. “Problems in the Average-Risk Interpretation of Categorical Dose-Response Analyses.” Epidemiology 6 (5): 563–65.
Guttman, Louis. 1977. “What Is Not What in Statistics.” Journal of the Royal Statistical Society. Series D (The Statistician) 26 (2): 81–107.
Guttorp, Peter. 1995. Stochastic Modeling of Scientific Data. 1st ed. Stochastic Modeling Series. London: Chapman & Hall.
Hardt, Moritz, and Benjamin Recht. 2021. “Patterns, Predictions, and Actions: A Story about Machine Learning.” arXiv:2102.05242 [cs, stat], February.
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer.
Kobayashi, Hisashi, Brian L. Mark, and William Turin. 2011. Probability, Random Processes, and Statistical Analysis: Applications to Communications, Signal Processing, Queueing Theory and Mathematical Finance. Cambridge University Press.
Kroese, Dirk P., Zdravko I. Botev, Thomas Taimre, and Radislav Vaisman. 2019. Mathematical and Statistical Methods for Data Science and Machine Learning. First edition. Chapman & Hall/CRC Machine Learning & Pattern Recognition. Boca Raton: CRC Press.
Lehmann, E. L., and George Casella. 1998. Theory of point estimation. 2nd ed. Springer texts in statistics. New York: Springer.
Lehmann, Erich L., and Joseph P. Romano. 2010. Testing Statistical Hypotheses. 3rd ed. Springer Texts in Statistics. New York, NY: Springer.
Lumley, Thomas, Paula Diehr, Scott Emerson, and Lu Chen. 2002. “The Importance of the Normality Assumption in Large Public Health Data Sets.” Annual Review of Public Health 23 (1): 151–69.
Mohri, Mehryar, Afshin Rostamizadeh, and Ameet Talwalkar. 2018. Foundations of Machine Learning. Second edition. Adaptive Computation and Machine Learning. Cambridge, Massachusetts: The MIT Press.
Murphy, Kevin P. 2012. Machine Learning: A Probabilistic Perspective. 1st edition. Adaptive Computation and Machine Learning Series. Cambridge, MA: MIT Press.
———. 2022. Probabilistic Machine Learning: An Introduction. Adaptive Computation and Machine Learning Series. Cambridge, Massachusetts: The MIT Press.
———. 2023. Probabilistic Machine Learning: Advanced Topics. MIT Press.
Robert, Christian P., and George Casella. 2004. Monte Carlo Statistical Methods. 2nd ed. Springer Texts in Statistics. New York: Springer.
Schervish, Mark J. 2012. Theory of Statistics. Springer Series in Statistics. New York, NY: Springer Science & Business Media.
Soch, Joram, Thomas J. Faulkenberry, Kenneth Petrykowski, and Carsten Allefeld. 2020. “StatProofBook/StatProofBook.github.io: StatProofBook 2020.” Zenodo.
Vaart, Aad W. van der. 2007. Asymptotic Statistics. 1st paperback ed., 8th printing. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press.
Wasserman, Larry. 2013. All of Statistics: A Concise Course in Statistical Inference. Springer.
