Statistics, computational complexity thereof

November 3, 2020 — March 10, 2021

machine learning
statistics

Statistical inference when computation isn’t free and memory is not infinite; what does this tell us about the learnable?


Lots of statistics is already concerned with complexity scaling; in Gaussian process regression, for example, we care a lot about avoiding naïve methods that scale as \(\mathcal{O}(N^3)\). In optimisation we care greatly about the trade-off between progress towards the optimum and the number of steps taken.
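To make that scaling concrete, here is a minimal numpy sketch (toy data, hypothetical parameter choices, not any particular library's API) contrasting the exact \(\mathcal{O}(N^3)\) GP posterior mean with a random-Fourier-feature approximation costing \(\mathcal{O}(Nm^2)\) for \(m \ll N\) features:

``` python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d regression data.
N, sigma2 = 500, 0.1
X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + np.sqrt(sigma2) * rng.standard_normal(N)

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# Exact GP posterior mean at the training inputs: the Cholesky
# factorisation of the N x N Gram matrix costs O(N^3) time, O(N^2) memory.
K = rbf(X, X) + sigma2 * np.eye(N)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean_exact = rbf(X, X) @ alpha

# Random-Fourier-feature approximation: m << N features give an
# O(N m^2) ridge-regression fit, trading accuracy for compute.
m, ell = 50, 1.0
W = rng.standard_normal((1, m)) / ell
b = rng.uniform(0, 2 * np.pi, m)
Phi = np.sqrt(2.0 / m) * np.cos(X @ W + b)   # N x m feature map
w = np.linalg.solve(Phi.T @ Phi + sigma2 * np.eye(m), Phi.T @ y)
mean_rff = Phi @ w

print("max |exact - RFF| posterior-mean gap:",
      np.abs(mean_exact - mean_rff).max())
```

The point of the sketch is the budget knob \(m\): dial it up and the approximation approaches the exact posterior mean, dial it down and you buy speed with statistical error.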

There is a converse question: how good a model can you fit, holding a computational/memory budget fixed? General results of that flavour are what I might collate here.
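On the memory side, the simplest estimator that fits a hard budget is Welford's one-pass algorithm: the sample mean and variance can be computed exactly in \(\mathcal{O}(1)\) memory over a stream of any length, which is the kind of space-bounded setting studied in, e.g., Moshkovitz and Moshkovitz (2017) below. A minimal sketch:

``` python
import numpy as np

def welford_stream(xs):
    """One-pass mean and sample variance with O(1) memory
    (Welford's algorithm): an exact statistic under a constant
    memory budget, however long the stream is."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return mean, m2 / (n - 1)

rng = np.random.default_rng(1)
xs = rng.standard_normal(100_000)
print(welford_stream(xs))            # streaming, O(1) memory
print(xs.mean(), xs.var(ddof=1))     # batch, O(N) memory
```

Richer statistics generally lack such exact one-pass forms, which is where the interesting trade-offs between memory, passes over the data, and estimation error begin.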

For now there are some fun ones about large models.

References

Bossaerts, Yadav, and Murawski. 2019. “Uncertainty and Computational Complexity.” Philosophical Transactions of the Royal Society B: Biological Sciences.
Frey, and Jojic. 2005. “A Comparison of Algorithms for Inference and Learning in Probabilistic Graphical Models.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Jordan, and Weiss. 2002. “Probabilistic Inference in Graphical Models.” Handbook of Neural Networks and Brain Theory.
Moshkovitz, and Moshkovitz. 2017. “Mixing Implies Lower Bounds for Space Bounded Learning.” In Conference on Learning Theory.