Gradient descent, first-order, stochastic
a.k.a. SGD
January 30, 2020 — May 19, 2023
Stochastic optimization uses noisy (possibly approximate) first-order gradient information to find the argument which minimises
\[ x^* = \operatorname{arg\,min}_{x} f(x) \]
for some objective function \(f:\mathbb{R}^n\to\mathbb{R}\).
That this works with little fuss in very high dimensions is a major pillar of deep learning.
1 Basic
The original version, framed in terms of root finding, is (Herbert Robbins and Monro 1951); the analysis was later generalised in (H. Robbins and Siegmund 1971), which uses martingale arguments to establish convergence. (Lai 2003) supplies some historical context. That article was written before the current craze for SGD in deep learning; after 2013 or so there is so much information on the method that the challenge becomes sifting the AI hype from the useful.
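In modern form, the scheme is the update
\[ x_{k+1} = x_k - \gamma_k\, g_k, \qquad \mathbb{E}[g_k \mid x_k] = \nabla f(x_k), \]
where the step sizes \(\gamma_k\) satisfy the classical Robbins–Monro conditions, decaying, but not too fast:
\[ \sum_{k=1}^{\infty} \gamma_k = \infty, \qquad \sum_{k=1}^{\infty} \gamma_k^2 < \infty, \]
e.g. \(\gamma_k = 1/k\).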
I recommend Francis Bach’s Sum of geometric series trick as an introduction to showing advanced things about SGD using elementary tools.
Francesco Orabona on how to prove SGD converges:
to balance the universe of first-order methods, I decided to show how to easily prove the convergence of the iterates in SGD, even in unbounded domains.
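A toy numerical check of that convergence; the quadratic objective and the Gaussian gradient noise here are illustrative assumptions of mine, not anyone's benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x):
    # Unbiased estimate of the gradient of f(x) = 0.5 * ||x||^2;
    # the true gradient is x itself, corrupted here with Gaussian noise.
    return x + rng.normal(scale=0.5, size=x.shape)

x = rng.normal(size=10)            # initial iterate
for k in range(1, 1001):
    gamma = 1.0 / k                # step sizes satisfying the Robbins-Monro conditions
    x = x - gamma * noisy_grad(x)

print(np.linalg.norm(x))           # should end up near 0, the minimiser of f
```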
2 Gradient flows
Gradient flows are a continuous-time SDE limit modelling stochastic gradient descent. See gradient flows.
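One common form, valid as a small-learning-rate approximation:
\[ \mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{\gamma\,\Sigma(X_t)}\,\mathrm{d}B_t, \]
where \(\gamma\) is the learning rate and \(\Sigma\) the covariance of the gradient noise; setting the noise to zero recovers the deterministic gradient-flow ODE \(\dot{x} = -\nabla f(x)\).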
3 Variance-reduced
🏗
Zeyuan Allen-Zhu: Faster Than SGD 1: Variance Reduction:
SGD is well-known for large-scale optimization. In my mind, there are two (and only two) fundamental improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. In this post I’d love to conduct a survey regarding (1),
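The canonical such method is SVRG: keep a snapshot \(\tilde{x}\) and its full gradient \(\tilde{\mu} = \nabla f(\tilde{x})\), then use the corrected estimate \(g = \nabla f_i(x) - \nabla f_i(\tilde{x}) + \tilde{\mu}\), which remains unbiased but whose variance shrinks as \(x\) approaches \(\tilde{x}\). A sketch, with the least-squares problem my own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    # Gradient of the i-th term of f(x) = mean over i of 0.5 * (A[i] @ x - b[i])**2.
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / n

x = np.zeros(d)
gamma = 0.01
for epoch in range(20):
    x_snap = x.copy()
    mu = full_grad(x_snap)               # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced estimate: unbiased, variance -> 0 as x -> x_snap
        g = grad_i(x, i) - grad_i(x_snap, i) + mu
        x = x - gamma * g

print(np.linalg.norm(full_grad(x)))      # near 0 at the least-squares optimum
```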
4 Stochastic momentum
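The standard recipe, for orientation: couple a velocity to the gradient estimate, as in the common deep-learning convention
\[ v_{k+1} = \beta\, v_k + g_k, \qquad x_{k+1} = x_k - \gamma\, v_{k+1}, \]
with momentum parameter \(\beta \in [0, 1)\); \(\beta = 0\) recovers plain SGD.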
5 Polyak averaging
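In its simplest (Polyak–Ruppert) form: run SGD as usual, but report the running average of the iterates rather than the last one,
\[ \bar{x}_K = \frac{1}{K} \sum_{k=1}^{K} x_k, \]
which averages out the sampling noise in the final estimate.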
6 Normalized
Zhiyuan Li and Sanjeev Arora argue:
You may remember our previous blog post showing that it is possible to do state-of-the-art deep learning with learning rate that increases exponentially during training. It was meant to be a dramatic illustration that what we learned in optimization classes and books isn’t always a good fit for modern deep learning, specifically, normalized nets, which is our term for nets that use any one of popular normalization schemes, e.g. BatchNorm (BN), GroupNorm (GN), WeightNorm (WN). Today’s post (based upon our paper with Kaifeng Lyu at NeurIPS20) identifies other surprising incompatibilities between normalized nets and traditional analyses.
7 In MCMC
See SGMCMC.
8 Adaptive
What we do in practice in neural nets is adaptive tuning of GD rates. See Adaptive SGD.
9 Incoming
Mini-batch and stochastic methods minimise a loss when you have a lot of data, or a lot of parameters, and using it all at once is silly; or when you want to improve your solution iteratively as data comes in. You need access to a gradient of your loss, ideally automatically calculated. It is not at all clear that this should work better than collating all your data and optimising offline, except that much of modern machine learning demonstrates that it does.
Sometimes this apparently stupid trick can even be fast in small-dimensional problems, so you may as well try. A concrete instance of the mini-batch recipe is sketched below.
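A minimal mini-batch loop, on a least-squares problem of my own devising:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, batch = 10_000, 20, 64
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)    # noisy linear observations

x = np.zeros(d)
gamma = 0.01
for step in range(2000):
    idx = rng.integers(n, size=batch)        # look at a small batch, not all n rows
    resid = A[idx] @ x - b[idx]
    g = A[idx].T @ resid / batch             # mini-batch gradient of 0.5 * mean squared residual
    x = x - gamma * g

print(np.linalg.norm(x - x_true))            # small: close to the generating parameter
```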
Technically, “online” optimisation in bandit/RL problems might imply that you have to “minimise regret online”, which has a slightly different meaning: you see each training example only as it arrives along some notional arrow of time, yet wish to make the “best” decision at the next time, possibly choosing your next experiment to trade off exploration against exploitation, and so on.
In SGD you can see your data as often as you want and in whatever order, but you only look at a bit at a time. Usually the data is given and predictions make no difference to what information is available to you.
Some of the same technology pops up in each of these notions of online optimisation, but I am mostly thinking about SGD here.
There are many more permutations and variations used in practice.