Statistics, computational complexity thereof
2020-11-03 — 2021-03-09
Wherein statistical inference is examined under fixed computation and memory budgets, the computational burden is illustrated by the naïve \(\mathcal{O}(N^3)\) scaling of Gaussian process methods, and limits on model expressivity under step and storage constraints are considered.
Statistical inference when computation isn’t free and memory is not infinite; what does this tell us about the learnable?
Lots of statistics is already concerned with complexity scaling; in Gaussian process regression, for example, we care a lot about avoiding naïve methods that scale as \(\mathcal{O}(N^3)\) in the number of observations \(N\). In optimisation we care greatly about the trade-off between how closely we approach the optimum and how many steps that takes.
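To make the step-count trade-off concrete, here is a toy sketch of my own (not from any referenced source): gradient descent on a strongly convex quadratic, where suboptimality shrinks geometrically, so each extra digit of accuracy costs a roughly constant number of additional steps.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random, well-conditioned, strongly convex quadratic: f(x) = 0.5 x'Ax - b'x.
M = rng.normal(size=(20, 20))
A = M @ M.T / 20 + np.eye(20)             # symmetric positive definite
b = rng.normal(size=20)
x_star = np.linalg.solve(A, b)            # the exact optimum

f = lambda x: 0.5 * x @ A @ x - b @ x
step = 1.0 / np.linalg.eigvalsh(A).max()  # safe step size 1 / L

x = np.zeros(20)
for t in range(1, 101):
    x -= step * (A @ x - b)               # one gradient step
    if t % 25 == 0:
        # Suboptimality shrinks by a constant factor per step, until
        # floating-point precision becomes the binding constraint.
        print(f"step {t:3d}: f(x) - f(x*) = {f(x) - f(x_star):.3e}")
```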
There is a converse question: how good a model can you fit, holding a computational/memory budget fixed? General results of that flavour are what I might collate here.
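As a minimal illustration of the fixed-budget question, here is a sketch (all names and parameter values are my own, illustrative choices) comparing exact GP regression, whose Cholesky factorisation costs \(\mathcal{O}(N^3)\), against a random Fourier feature approximation that caps the cost at \(\mathcal{O}(ND^2)\) for a fixed feature budget \(D\) — the budget buys an approximate posterior rather than the exact one.

```python
import numpy as np

rng = np.random.default_rng(0)


def exact_gp_mean(X, y, X_star, lengthscale=1.0, noise=0.1):
    """Exact GP posterior mean with an RBF kernel; Cholesky costs O(N^3)."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq / lengthscale**2)

    K = rbf(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                        # the O(N^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf(X_star, X) @ alpha


def rff_gp_mean(X, y, X_star, D=200, lengthscale=1.0, noise=0.1):
    """Random-Fourier-feature approximation; costs O(N D^2) for D features."""
    W = rng.normal(scale=1.0 / lengthscale, size=(X.shape[1], D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    phi = lambda A: np.sqrt(2.0 / D) * np.cos(A @ W + b)

    Phi = phi(X)                                     # N x D feature matrix
    w = np.linalg.solve(Phi.T @ Phi + noise**2 * np.eye(D), Phi.T @ y)
    return phi(X_star) @ w


# Tiny 1-d demo: the two posterior means should roughly agree,
# at very different asymptotic costs.
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
X_star = np.linspace(-3, 3, 5)[:, None]
print("exact:", exact_gp_mean(X, y, X_star).round(3))
print("RFF  :", rff_gp_mean(X, y, X_star).round(3))
```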
For now, there are some fun ones about large models.