An online learning perspective gives bounds on the regret: the gap in performance between the online estimator and the best estimator chosen in hindsight with access to the entire dataset.
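Concretely (standard notation, not from the original note): if $\ell_t$ is the loss revealed at round $t$ and $x_t \in \mathcal{X}$ is the decision played, the regret after $T$ rounds is

$$
\mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=1}^{T} \ell_t(x),
$$

and a “no-regret” algorithm is one for which $\mathrm{Regret}_T = o(T)$, i.e. its average loss approaches that of the best fixed decision in hindsight.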
Many things are online learning in a loose sense; stochastic gradient descent, for example, is closely related. However, if you meet someone who claims to study “online learning”, they usually mean to emphasize particular things: typically, worst-case regret guarantees over arbitrary (possibly adversarial) data sequences rather than i.i.d. assumptions. The framing appears frequently in the context of bandit problems; connection TBD.
Hazan’s Introduction to Online Convex Optimization looks fresh.
Follow-the-regularized-leader
TBD
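For reference while this section is TBD, here is the generic follow-the-regularized-leader update (standard definition, not drawn from any particular source above): at each round, choose

$$
x_{t+1} = \operatorname*{arg\,min}_{x \in \mathcal{X}} \left\{ \sum_{s=1}^{t} \ell_s(x) + R(x) \right\},
$$

where $R$ is a regularizer such as $\tfrac{1}{2\eta}\|x\|_2^2$. With linearized losses and that quadratic regularizer, FTRL reduces to lazy online gradient descent, which connects it back to the SGD remark above.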
Covariance
Learning a covariance online is a much more basic application than the other fancy things considered here, but I guess it still fits. John D. Cook:
This better way of computing variance goes back to a 1962 paper by B. P. Welford and is presented in Donald Knuth’s Art of Computer Programming, Vol 2, page 232, 3rd edition. […]
- Initialize $M_1 = x_1$ and $S_1 = 0$.
- For subsequent $x$’s, use the recurrence formulas
  $$M_k = M_{k-1} + \frac{x_k - M_{k-1}}{k}, \qquad S_k = S_{k-1} + (x_k - M_{k-1})(x_k - M_k).$$
- For $2 \le k \le n$, the $k$th estimate of the variance is $s^2 = S_k/(k-1)$.
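A minimal sketch of this recurrence in Python, generalized from a scalar variance to a covariance matrix by tracking outer products of deviations (the class name and interface are mine, not from the quoted post):

```python
import numpy as np


class RunningCovariance:
    """Welford-style single-pass estimate of a mean vector and covariance matrix.

    Generalizes the scalar recurrence above:
        M_k = M_{k-1} + (x_k - M_{k-1}) / k
        S_k = S_{k-1} + outer(x_k - M_{k-1}, x_k - M_k)
    """

    def __init__(self, dim):
        self.k = 0                      # number of observations seen so far
        self.M = np.zeros(dim)          # running mean
        self.S = np.zeros((dim, dim))   # running sum of deviation outer products

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        delta = x - self.M              # deviation from the *old* mean
        self.M += delta / self.k
        self.S += np.outer(delta, x - self.M)  # old-mean deviation times new-mean deviation

    @property
    def mean(self):
        return self.M

    @property
    def cov(self):
        # Unbiased (sample) covariance; needs at least two observations.
        return self.S / (self.k - 1)


# Sanity check against the batch estimator.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
rc = RunningCovariance(3)
for x in X:
    rc.update(x)
assert np.allclose(rc.cov, np.cov(X, rowvar=False))
```

For the scalar case this is exactly the Welford/Cook recurrence, with `cov` returning $S_k/(k-1)$.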
References
Abernethy, Bartlett, and Hazan. 2011. “Blackwell Approachability and No-Regret Learning Are Equivalent.” In.
Cesa-Bianchi, and Orabona. 2021. “Online Learning Algorithms.” Annual Review of Statistics and Its Application.
Feng, Xu, and Mannor. 2017. “Outlier Robust Online Learning.” arXiv:1701.00251 [cs, stat].
Igel, Suttorp, and Hansen. 2006. “A Computational Efficient Covariance Matrix Update and a (1+1)-CMA for Evolution Strategies.” In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation - GECCO ’06.
Orabona, and Pál. n.d. “Open Problem: Parameter-Free and Scale-Free Online Algorithms.”
Vervoort. 1996. “Blackwell Games.” In Statistics, Probability and Game Theory: Papers in Honor of David Blackwell.
Zarezade, Upadhyay, Rabiee, et al. 2017. “RedQueen: An Online Algorithm for Smart Broadcasting in Social Networks.” In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. WSDM ’17.
Zinkevich. 2003. “Online Convex Programming and Generalized Infinitesimal Gradient Ascent.” In Proceedings of the Twentieth International Conference on International Conference on Machine Learning. ICML’03.