This page mostly exists to collect overview introductions to statistics that are not terrible. I’m especially interested in modern fusion methods that harmonise what we would call statistics with what we would call machine learning, and in clearing up the unnecessary terminological confusion between those traditions.
Here are some recommended courses to get started if you don’t know what you’re doing.
- Larry Wasserman’s stats course
- Shalizi’s regression lectures
- Moritz Hardt and Benjamin Recht’s *Patterns, Predictions, and Actions: A story about machine learning*
See also the recommended texts below. May I draw your attention especially to Kroese et al. (2019), which I proofread for my supervisor Zdravko Botev, and enjoyed greatly? It smoothly ushers mathematicians from outside statistics into applied statistics, without the excruciating pace of a layperson introduction. It is now freely available online, and the online version has fewer typos.
There are also statistics podcasts.
Boaz Barak’s *ML Theory with bad drawings* attempts one division of labour here:
> However, what we actually do is at least thrice-removed from this ideal:
>
> - The model gap: We do not optimize over all possible systems, but rather a small subset of such systems (e.g., ones that belong to a certain family of models).
> - The metric gap: In almost all cases, we do not optimize the actual measure of success we care about, but rather another metric that is at best correlated with it.
> - The algorithm gap: We don’t even optimize the latter metric, since it will almost always be non-convex, and hence the system we end up with depends on our starting point and the particular algorithms we use.
>
> The magic of machine learning is that sometimes (though not always!) we can still get good results despite these gaps. Much of the theory of machine learning is about understanding under what conditions we can bridge some of these gaps.
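These three gaps are concrete enough to reproduce in a toy problem. Here is a minimal sketch of my own (the target function, the model family, and every constant below are invented for illustration; none of it is from Barak’s post): we care about sign agreement with a square wave, but we fit a one-parameter sinusoid by gradient descent on a squared-error surrogate.

```python
# A toy illustration of the three gaps. Everything here (target function,
# model family, constants) is invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)

# "Reality": a square wave. It lies outside the model family below, so no
# parameter setting can represent it exactly (the model gap).
def truth(x):
    return np.sign(np.sin(3.0 * x))

x_train = rng.uniform(-3.0, 3.0, size=200)
y_train = truth(x_train)

# Our model family: a single-parameter family f_w(x) = sin(w * x).
def model(w, x):
    return np.sin(w * x)

# What we actually care about: sign agreement with reality on fresh data.
def success(w, n=10_000):
    x = rng.uniform(-3.0, 3.0, size=n)
    return np.mean(np.sign(model(w, x)) == truth(x))

# What we optimize instead: squared error on a training sample, a
# differentiable proxy merely correlated with success (the metric gap).
def train_loss(w):
    return np.mean((model(w, x_train) - y_train) ** 2)

def grad(w):
    residual = model(w, x_train) - y_train
    return np.mean(2.0 * residual * np.cos(w * x_train) * x_train)

# Gradient descent on this non-convex loss: where we end up depends on
# where we start (the algorithm gap).
for w0 in (0.5, 2.0, 5.0):
    w = w0
    for _ in range(500):
        w -= 0.01 * grad(w)
    print(f"init {w0}: w -> {w:.2f}, "
          f"train loss {train_loss(w):.3f}, success {success(w):.3f}")
```

Run as-is, the three initialisations typically land in different local minima of the surrogate loss, with visibly different success rates, while no value of the single parameter reproduces the square wave exactly.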
The above discussion explains the “machine learning is just X” takes. The expressivity of our models falls under approximation theory. The gap between the success we want to achieve and the metric we can measure often corresponds to the difference between population and sample performance, which becomes a question of statistics. The study of our algorithms’ performance falls under optimization.
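One conventional way to make that mapping precise is the standard excess-risk decomposition (the notation here is mine, not the quoted post’s). Write $R$ for population risk, $h^\ast$ for the best possible predictor, $\mathcal{H}$ for our model family, $\hat{h}$ for the empirical risk minimiser over $\mathcal{H}$, and $h_{\mathrm{alg}}$ for the hypothesis our algorithm actually returns. Then

$$
R(h_{\mathrm{alg}}) - R(h^\ast)
= \underbrace{R(h_{\mathrm{alg}}) - R(\hat{h})}_{\text{optimization error (algorithm gap)}}
+ \underbrace{R(\hat{h}) - \inf_{h \in \mathcal{H}} R(h)}_{\text{estimation error (metric gap)}}
+ \underbrace{\inf_{h \in \mathcal{H}} R(h) - R(h^\ast)}_{\text{approximation error (model gap)}}.
$$

The identity is just a telescoping sum. The last two terms are nonnegative; the first can even be negative, since the hypothesis the algorithm returns may generalise better than the exact empirical minimiser.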