Online learning

September 30, 2018 — October 20, 2022

Bayes
dynamical systems
linear algebra
optimization
probability
signal processing
statistics
time series

An online learning perspective gives bounds on the regret: the gap in performance between the online estimator and the optimal estimator chosen with access to the entire dataset in hindsight.
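Concretely, to fix notation: with losses \(\ell_1, \ell_2, \dots\) revealed one round at a time, the regret of the online choices \(\theta_1, \theta_2, \dots\) after \(T\) rounds is \[\mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(\theta_t) - \min_{\theta \in \Theta} \sum_{t=1}^{T} \ell_t(\theta).\] The bounds we want are sublinear in \(T\), so that average regret vanishes; \(O(\sqrt{T})\) for convex losses and \(O(\log T)\) for strongly convex ones are the classic rates.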

A lot of things are sort-of online learning; stochastic gradient descent, for example, is closely related. However, if you meet someone who claims to study “online learning” they usually mean to emphasise particular things, typically worst-case regret guarantees against an arbitrary (even adversarial) data sequence rather than stochastic assumptions. Frequently seen in the context of bandit problems; connection TBD.

Hazan’s Introduction to online convex optimization looks fresh.

1 Follow-the-regularized leader

TBD
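Pending a proper write-up, a minimal sketch of the standard setup: after observing (sub)gradients \(g_1, \dots, g_t\) of the convex losses, follow-the-regularized-leader plays \[\theta_{t+1} = \operatorname*{arg\,min}_{\theta \in \Theta} \; \eta \sum_{s=1}^{t} \langle g_s, \theta \rangle + R(\theta),\] where \(R\) is a strongly convex regularizer. With \(R(\theta) = \tfrac{1}{2}\lVert\theta\rVert_2^2\) and an unconstrained domain this collapses to lazy online gradient descent, \(\theta_{t+1} = -\eta \sum_{s \le t} g_s\), and with \(\eta \propto 1/\sqrt{T}\) the usual analysis gives \(O(\sqrt{T})\) regret.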

2 Parameter-free

Francesco Orabona’s tutorial.
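A rough gloss from memory, so treat it as a sketch rather than a faithful summary of that tutorial: one route to parameter-freeness is coin betting. Identify negative (sub)gradients \(c_t = -g_t \in [-1, 1]\) with coin outcomes, start with wealth \(W_0 = \epsilon\), and let the Krichevsky–Trofimov bettor wager \[\theta_t = \frac{\sum_{s=1}^{t-1} c_s}{t}\, W_{t-1}, \qquad W_t = W_{t-1} + c_t \theta_t.\] The point is that no learning rate needs tuning; the regret guarantee adapts to the norm of the comparator.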

3 Covariance

Learning covariance online is a much more basic application than the other fancy things considered here, but I guess it still fits. John D Cook summarises Welford’s recurrence for the running mean and variance (a Python sketch follows the quoted recipe):

This better way of computing variance goes back to a 1962 paper by B. P. Welford and is presented in Donald Knuth’s Art of Computer Programming, Vol 2, page 232, 3rd edition. […]

  • Initialize \(M_1 = x_1\) and \(S_1 = 0.\)
  • For subsequent \(x\)s, use the recurrence formulas \[M_k = M_{k-1} + (x_k - M_{k-1})/k\] \[S_k = S_{k-1} + (x_k - M_{k-1})(x_k - M_k).\]
  • For \(2 \leq k \leq n\), the \(k\)th estimate of the variance is \[s_k^2 = S_k/(k - 1).\]
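A minimal Python transcription of that recipe (my own sketch, not Cook’s code; the class and method names are just for illustration):

```python
class OnlineVariance:
    """Welford's online algorithm for the running mean and sample variance."""

    def __init__(self):
        self.k = 0    # number of observations seen so far
        self.M = 0.0  # running mean, M_k
        self.S = 0.0  # running sum of squared deviations, S_k

    def update(self, x):
        self.k += 1
        if self.k == 1:
            self.M = x    # M_1 = x_1
            self.S = 0.0  # S_1 = 0
        else:
            old_M = self.M
            self.M += (x - old_M) / self.k        # M_k = M_{k-1} + (x_k - M_{k-1}) / k
            self.S += (x - old_M) * (x - self.M)  # S_k = S_{k-1} + (x_k - M_{k-1})(x_k - M_k)

    @property
    def mean(self):
        return self.M

    @property
    def variance(self):
        # s_k^2 = S_k / (k - 1), defined for k >= 2
        return self.S / (self.k - 1) if self.k > 1 else float("nan")


ov = OnlineVariance()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    ov.update(x)
print(ov.mean, ov.variance)  # 5.0, ~4.571 (matches the batch sample variance)
```

The same incremental-update trick extends to covariance matrices by accumulating an outer-product correction term in place of \(S_k\); Chan, Golub, and Leveque (1983) compare the numerical behaviour of the various one-pass updates.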

4 Anomaly detection

See also anomaly detection.

5 References

Abernethy, Bartlett, and Hazan. 2011. “Blackwell Approachability and No-Regret Learning Are Equivalent.” In.
Allen-Zhu, Li, Singh, et al. 2017. “Near-Optimal Design of Experiments via Regret Minimization.” In PMLR.
Arora, Ge, Liang, et al. 2017. “Generalization and Equilibrium in Generative Adversarial Nets (GANs).” arXiv:1703.00573 [Cs].
Cesa-Bianchi, and Orabona. 2021. “Online Learning Algorithms.” Annual Review of Statistics and Its Application.
Chan, Golub, and Leveque. 1983. “Algorithms for Computing the Sample Variance: Analysis and Recommendations.” The American Statistician.
Dasgupta, and Hsu. 2007. “On-Line Estimation with the Multivariate Gaussian Distribution.” In Learning Theory.
Feng, Xu, and Mannor. 2017. “Outlier Robust Online Learning.” arXiv:1701.00251 [Cs, Stat].
Igel, Suttorp, and Hansen. 2006. “A Computational Efficient Covariance Matrix Update and a (1+1)-CMA for Evolution Strategies.” In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation - GECCO ’06.
Koppel, Warnell, Stump, et al. 2016. “Parsimonious Online Learning with Kernels via Sparse Projections in Function Space.” arXiv:1612.04111 [Cs, Stat].
Ling. 1974. “Comparison of Several Algorithms for Computing Sample Means and Variances.” Journal of the American Statistical Association.
Lyu, Needell, and Balzano. 2020. “Online Matrix Factorization for Markovian Data and Applications to Network Dictionary Learning.” Journal of Machine Learning Research.
McGee, Kosterlitz, Kaznatcheev, et al. 2022. “The Cost of Information Acquisition by Natural Selection.”
Orabona, and Pál. n.d. “Open Problem: Parameter-Free and Scale-Free Online Algorithms.”
Vervoort. 1996. “Blackwell Games.” In Statistics, Probability and Game Theory: Papers in Honor of David Blackwell.
Xu, and Zeevi. 2023. “Bayesian Design Principles for Frequentist Sequential Learning.”
Zarezade, Upadhyay, Rabiee, et al. 2017. “RedQueen: An Online Algorithm for Smart Broadcasting in Social Networks.” In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. WSDM ’17.
Zinkevich. 2003. “Online Convex Programming and Generalized Infinitesimal Gradient Ascent.” In Proceedings of the Twentieth International Conference on International Conference on Machine Learning. ICML’03.