Gradient descent, first-order, stochastic

a.k.a. SGD

January 30, 2020 — May 19, 2023

functional analysis
neural nets
optimization
SDEs
stochastic processes

Stochastic optimization uses noisy (possibly approximate) first-order gradient information to find the argument that minimises

\[ x^* = \operatorname{arg\,min}_{x} f(x) \]

for some objective function \(f:\mathbb{R}^n\to\mathbb{R}\).

That this works with little fuss in very high dimensions is a major pillar of deep learning.
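
In its plainest form the update is \(x_{t+1} = x_t - \eta_t \hat{g}_t\), where \(\hat{g}_t\) is an unbiased estimate of \(\nabla f(x_t)\) computed from a random subsample of the data. A minimal sketch in numpy, with a toy least-squares objective standing in for whatever \(f\) you actually care about (the problem and all constants here are illustrative, not canonical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x) = (1/2m) ||A x - b||^2, minimised near x_true.
m, n = 1000, 10
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.1 * rng.normal(size=m)

def minibatch_grad(x, batch_size=32):
    """Unbiased estimate of grad f(x) from a random mini-batch of rows."""
    idx = rng.integers(0, m, size=batch_size)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch_size

x = np.zeros(n)
for t in range(1, 5001):
    eta_t = 0.1 / np.sqrt(t)          # decaying step size, Robbins-Monro style
    x -= eta_t * minibatch_grad(x)

print(np.linalg.norm(x - x_true))     # small, up to noise
```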

1 Basic

The original version, framed as root finding, is (Herbert Robbins and Monro 1951); Robbins later generalised the analysis in (H. Robbins and Siegmund 1971), using martingale arguments to prove convergence. (Lai 2003) supplies historical context. That survey was written before the current craze for SGD in deep learning; after 2013 or so the problem is rather that there is so much written about the method that the challenge becomes sifting the AI hype from the useful material.
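
The original recursion seeks a root of \(h(x)=\mathbb{E}\,H(x,\xi)\) from noisy evaluations alone, with step sizes satisfying \(\sum_t a_t=\infty\) and \(\sum_t a_t^2<\infty\). A toy sketch (the quantile-estimation example is my choice of illustration, not theirs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate the median of a distribution we can only sample from,
# i.e. find the root of h(x) = P(xi <= x) - 0.5.
x = 0.0
for t in range(1, 100_001):
    a_t = 1.0 / t                      # sum a_t = inf, sum a_t^2 < inf
    xi = rng.standard_normal() + 3.0   # samples from N(3, 1); true median 3
    # Noisy but unbiased evaluation of h at the current x.
    x -= a_t * ((xi <= x) - 0.5)

print(x)  # converges to the median, 3.0
```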

I recommend Francis Bach’s Sum of geometric series trick as an introduction to showing advanced things about SGD using elementary tools.

Francesco Orabona on how to prove SGD converges:

to balance the universe of first-order methods, I decided to show how to easily prove the convergence of the iterates in SGD, even in unbounded domains.

2 Gradient flows

Gradient flows are continuous-time limits, typically SDEs, that model stochastic gradient descent. See gradient flows.
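
One common form of the heuristic (as used for example by Mandt, Hoffman, and Blei (2017); the details of the noise model vary by paper): with step size \(\eta\) and gradient-noise covariance \(\Sigma(x)\), the SGD iterates approximately follow the diffusion

\[ \mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{\eta}\,\Sigma(X_t)^{1/2}\,\mathrm{d}B_t, \]

of which a single SGD update is one Euler–Maruyama step.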

3 Variance-reduced

🏗

Zeyuan Allen-Zhu: Faster Than SGD 1: Variance Reduction:

SGD is well-known for large-scale optimization. In my mind, there are two (and only two) fundamental improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. In this post I’d love to conduct a survey regarding (1)…
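
A minimal sketch of the variance-reduction idea in SVRG style (following the general recipe surveyed in the post above and in e.g. Reddi et al. (2016); the toy problem and constants are mine): keep a snapshot \(\tilde{x}\), compute its full gradient once per epoch, and use it as a control variate for each stochastic gradient.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy finite-sum objective: f(x) = (1/m) sum_i f_i(x), f_i(x) = (1/2)(a_i.x - b_i)^2.
m, n = 1000, 10
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true                       # noiseless, so the minimiser is x_true

def grad_i(x, i):
    """Gradient of the i-th summand: a_i (a_i.x - b_i)."""
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / m

x = np.zeros(n)
eta = 0.01
for epoch in range(30):
    x_snap = x.copy()
    g_snap = full_grad(x_snap)       # one full-gradient pass per epoch
    for _ in range(m):
        i = rng.integers(m)
        # Control-variate gradient: still unbiased, and its variance
        # vanishes as both x and x_snap approach the optimum.
        g = grad_i(x, i) - grad_i(x_snap, i) + g_snap
        x -= eta * g

print(np.linalg.norm(x - x_true))    # should be near machine precision
```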

4 Stochastic momentum
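
🏗 For now, just the update itself: the stochastic heavy-ball method (see Goh 2017 for why momentum helps, and Kidambi et al. 2023 for caveats in the stochastic setting) maintains a velocity buffer alongside the iterate. A minimal sketch on a toy quadratic (constants illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 1000, 10
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n)

x, v = np.zeros(n), np.zeros(n)
beta, eta = 0.9, 0.01                      # momentum and step size
for t in range(5000):
    i = rng.integers(0, m, size=32)
    g = A[i].T @ (A[i] @ x - b[i]) / 32    # mini-batch gradient
    v = beta * v + g                        # exponentially-weighted velocity
    x -= eta * v                            # step along the velocity
```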

5 Polyak averaging
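
🏗 The trick is essentially one line (Polyak and Juditsky 1992): run SGD with a slowly decaying step size, but report the running average of the iterates rather than the last iterate, which recovers optimal asymptotic rates. Sketchily (schedule and constants illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 1000, 10
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n)

x = np.zeros(n)
x_bar = np.zeros(n)                    # Polyak-Ruppert average of the iterates
for t in range(1, 10_001):
    i = rng.integers(0, m, size=32)
    g = A[i].T @ (A[i] @ x - b[i]) / 32
    x -= (0.5 / t**0.6) * g            # slowly decaying eta_t ~ t^(-0.6)
    x_bar += (x - x_bar) / t           # running mean; report x_bar, not x
```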

6 Normalized

Zhiyuan Li and Sanjeev Arora argue:

You may remember our previous blog post showing that it is possible to do state-of-the-art deep learning with learning rate that increases exponentially during training. It was meant to be a dramatic illustration that what we learned in optimization classes and books isn’t always a good fit for modern deep learning, specifically, normalized nets, which is our term for nets that use any one of popular normalization schemes, e.g. BatchNorm (BN), GroupNorm (GN), WeightNorm (WN). Today’s post (based upon our paper with Kaifeng Lyu at NeurIPS20) identifies other surprising incompatibilities between normalized nets and traditional analyses.
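
A compressed version of why normalization breaks the textbook analysis: normalization layers make the loss scale-invariant in the weights, \(f(cw)=f(w)\) for all \(c>0\), and differentiating this identity gives

\[ \nabla f(cw) = \tfrac{1}{c}\nabla f(w), \qquad \langle \nabla f(w), w\rangle = 0, \]

so gradients shrink as the weight norm grows, the effective step size behaves like \(\eta/\lVert w\rVert^2\), and fixed-learning-rate intuitions from classical convex analysis do not transfer directly.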

7 In MCMC

See SGMCMC.
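
The simplest instance is stochastic gradient Langevin dynamics (Welling and Teh 2011), which adds Gaussian noise scaled to the step size to each mini-batch update:

\[ \theta_{t+1} = \theta_t + \frac{\epsilon_t}{2}\left( \nabla\log p(\theta_t) + \frac{N}{n}\sum_{i=1}^{n}\nabla\log p(x_{t_i}\mid\theta_t) \right) + \eta_t, \qquad \eta_t\sim\mathcal{N}(0,\epsilon_t I), \]

so that as \(\epsilon_t\) decays the iterates pass from noisy optimisation into (approximate) sampling from the posterior.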

8 Adaptive

What we do in practice in neural nets is adaptively tune the gradient-descent step sizes. See Adaptive SGD.
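
The prototype is AdaGrad (Duchi, Hazan, and Singer 2011), which scales each coordinate's step by the inverse root of its accumulated squared gradients. A sketch of one step (names mine):

```python
import numpy as np

def adagrad_step(x, g, G, eta=0.1, eps=1e-8):
    """One AdaGrad update. G accumulates per-coordinate squared gradients;
    frequently-updated coordinates get smaller effective step sizes."""
    G = G + g**2
    x = x - eta * g / (np.sqrt(G) + eps)
    return x, G
```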

9 Incoming

Mini-batch and stochastic methods minimise loss when you have so much data, or so many parameters, that using it all at once is silly, or when you want to improve your solution iteratively as data arrives, and you have access to a gradient of your loss, ideally automatically calculated. It is not at all clear a priori that this should work better than collating all your data and optimising offline, except that much of modern machine learning demonstrates that it does.

Sometimes this apparently stupid trick can even be fast in low-dimensional problems, so you may as well try.

Technically, “online” optimisation in bandit/RL problems can imply that you must “minimise regret online”, which has a slightly different meaning: it involves, e.g., seeing each training example only as it arrives along some notional arrow of time, yet wishing to make the “best” decision at the next time step, possibly choosing your next experiment to trade off exploration against exploitation, etc.

In SGD you can see your data as often as you want and in whatever order, but you only look at a bit at a time. Usually the data is given and predictions make no difference to what information is available to you.

Some of the same technology pops up in each of these notions of online optimisation, but I am mostly thinking about SGD here.

There are many more permutations and variations used in practice.

10 References

Ahn, Korattikara, and Welling. 2012. Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring.” In Proceedings of the 29th International Coference on International Conference on Machine Learning. ICML’12.
Alexos, Boyd, and Mandt. 2022. Structured Stochastic Gradient MCMC.” In Proceedings of the 39th International Conference on Machine Learning.
Arya, Schauer, Schäfer, et al. 2022. Automatic Differentiation of Programs with Discrete Randomness.” In.
Bach, Francis, and Moulines. 2011. Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning.” In Advances in Neural Information Processing Systems (NIPS).
Bach, Francis R., and Moulines. 2013. Non-Strongly-Convex Smooth Stochastic Approximation with Convergence Rate O(1/n).” In arXiv:1306.2119 [Cs, Math, Stat].
Benaïm. 1999. Dynamics of Stochastic Approximation Algorithms.” In Séminaire de Probabilités de Strasbourg. Lecture Notes in Math.
Bensoussan, Li, Nguyen, et al. 2020. Machine Learning and Control Theory.” arXiv:2006.05604 [Cs, Math, Stat].
Botev, and Lloyd. 2015. Importance Accelerated Robbins-Monro Recursion with Applications to Parametric Confidence Limits.” Electronic Journal of Statistics.
Bottou. 1991. Stochastic Gradient Learning in Neural Networks.” In Proceedings of Neuro-Nîmes 91.
———. 1998. Online Algorithms and Stochastic Approximations.” In Online Learning and Neural Networks.
———. 2010. Large-Scale Machine Learning with Stochastic Gradient Descent.” In Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT’2010).
Bottou, and Bousquet. 2008. The Tradeoffs of Large Scale Learning.” In Advances in Neural Information Processing Systems.
Bottou, Curtis, and Nocedal. 2016. Optimization Methods for Large-Scale Machine Learning.” arXiv:1606.04838 [Cs, Math, Stat].
Bottou, and LeCun. 2004. Large Scale Online Learning.” In Advances in Neural Information Processing Systems 16.
Bubeck. 2015. Convex Optimization: Algorithms and Complexity. Foundations and Trends in Machine Learning.
Cevher, Becker, and Schmidt. 2014. Convex Optimization for Big Data.” IEEE Signal Processing Magazine.
Chaudhari, Choromanska, Soatto, et al. 2017. Entropy-SGD: Biasing Gradient Descent Into Wide Valleys.”
Chen, Xiaojun. 2012. Smoothing Methods for Nonsmooth, Nonconvex Minimization.” Mathematical Programming.
Chen, Tianqi, Fox, and Guestrin. 2014. Stochastic Gradient Hamiltonian Monte Carlo.” In Proceedings of the 31st International Conference on Machine Learning.
Chen, Zaiwei, Mou, and Maguluri. 2021. Stationary Behavior of Constant Stepsize SGD Type Algorithms: An Asymptotic Characterization.”
Di Giovanni, Rowbottom, Chamberlain, et al. 2022. Graph Neural Networks as Gradient Flows.”
Domingos. 2020. Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” arXiv:2012.00152 [Cs, Stat].
Duchi, Hazan, and Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.” Journal of Machine Learning Research.
Feng, and Tu. 2021. The Inverse Variance–Flatness Relation in Stochastic Gradient Descent Is Critical for Finding Flat Minima.” Proceedings of the National Academy of Sciences.
Friedlander, and Schmidt. 2012. Hybrid Deterministic-Stochastic Methods for Data Fitting.” SIAM Journal on Scientific Computing.
Ghadimi, and Lan. 2013a. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming.” SIAM Journal on Optimization.
———. 2013b. Accelerated Gradient Methods for Nonconvex Nonlinear and Stochastic Programming.” arXiv:1310.3787 [Math].
Goh. 2017. Why Momentum Really Works.” Distill.
Hazan, Levy, and Shalev-Shwartz. 2015. Beyond Convexity: Stochastic Quasi-Convex Optimization.” In Advances in Neural Information Processing Systems 28.
Heyde. 1974. On Martingale Limit Theory and Strong Convergence Results for Stochastic Approximation Procedures.” Stochastic Processes and Their Applications.
Hu, Pan, and Kwok. 2009. Accelerated Gradient Methods for Stochastic Optimization and Online Learning.” In Advances in Neural Information Processing Systems.
Jakovetic, Freitas Xavier, and Moura. 2014. Convergence Rates of Distributed Nesterov-Like Gradient Methods on Random Networks.” IEEE Transactions on Signal Processing.
Kidambi, Netrapalli, Jain, et al. 2023. On the Insufficiency of Existing Momentum Schemes for Stochastic Optimization.” In.
Kingma, and Ba. 2015. Adam: A Method for Stochastic Optimization.” Proceeding of ICLR.
Lai. 2003. Stochastic Approximation.” The Annals of Statistics.
Lee, Panageas, Piliouras, et al. 2017. First-Order Methods Almost Always Avoid Saddle Points.” arXiv:1710.07406 [Cs, Math, Stat].
Lee, Simchowitz, Jordan, et al. 2016. Gradient Descent Converges to Minimizers.” arXiv:1602.04915 [Cs, Math, Stat].
Liu, and Wang. 2019. Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm.” In Advances In Neural Information Processing Systems.
Ljung, Pflug, and Walk. 1992. Stochastic Approximation and Optimization of Random Systems.
Ma, and Belkin. 2017. Diving into the Shallows: A Computational Perspective on Large-Scale Shallow Learning.” arXiv:1703.10622 [Cs, Stat].
Maclaurin, Duvenaud, and Adams. 2015. Early Stopping as Nonparametric Variational Inference.” In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics.
Mairal. 2013. Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization.” In Advances in Neural Information Processing Systems.
Mandt, Hoffman, and Blei. 2017. Stochastic Gradient Descent as Approximate Bayesian Inference.” JMLR.
McMahan, Holt, Sculley, et al. 2013. Ad Click Prediction: A View from the Trenches.” In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD ’13.
Mitliagkas, Zhang, Hadjis, et al. 2016. Asynchrony Begets Momentum, with an Application to Deep Learning.” arXiv:1605.09774 [Cs, Math, Stat].
Neu, Dziugaite, Haghifam, et al. 2021. Information-Theoretic Generalization Bounds for Stochastic Gradient Descent.” arXiv:2102.00931 [Cs, Stat].
Nguyen, Liu, Scheinberg, et al. 2017. Stochastic Recursive Gradient Algorithm for Nonconvex Optimization.” arXiv:1705.07261 [Cs, Math, Stat].
Patel. 2017. On SGD’s Failure in Practice: Characterizing and Overcoming Stalling.” arXiv:1702.00317 [Cs, Math, Stat].
Polyak, and Juditsky. 1992. Acceleration of Stochastic Approximation by Averaging.” SIAM Journal on Control and Optimization.
Reddi, Hefny, Sra, et al. 2016. Stochastic Variance Reduction for Nonconvex Optimization.” In PMLR.
Robbins, Herbert, and Monro. 1951. A Stochastic Approximation Method.” The Annals of Mathematical Statistics.
Robbins, H., and Siegmund. 1971. A Convergence Theorem for Non Negative Almost Supermartingales and Some Applications.” In Optimizing Methods in Statistics.
Ruder. 2017. An Overview of Gradient Descent Optimization Algorithms.”
Sagun, Guney, Arous, et al. 2014. Explorations on High Dimensional Landscapes.” arXiv:1412.6615 [Cs, Stat].
Salimans, and Kingma. 2016. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.” In Advances in Neural Information Processing Systems 29.
Shalev-Shwartz, and Tewari. 2011. Stochastic Methods for L1-Regularized Loss Minimization.” Journal of Machine Learning Research.
Şimşekli, Sener, Deligiannidis, et al. 2020. Hausdorff Dimension, Stochastic Differential Equations, and Generalization in Neural Networks.” CoRR.
Smith, Dherin, Barrett, et al. 2020. On the Origin of Implicit Regularization in Stochastic Gradient Descent.” In.
Spall. 2000. Adaptive Stochastic Approximation by the Simultaneous Perturbation Method.” IEEE Transactions on Automatic Control.
Sun, Yang, Xun, et al. 2023. Scheduling Hyperparameters to Improve Generalization: From Centralized SGD to Asynchronous SGD.” ACM Transactions on Knowledge Discovery from Data.
Vishwanathan, Schraudolph, Schmidt, et al. 2006. “Accelerated Training of Conditional Random Fields with Stochastic Gradient Methods.” In Proceedings of the 23rd International Conference on Machine Learning.
Welling, and Teh. 2011. Bayesian Learning via Stochastic Gradient Langevin Dynamics.” In Proceedings of the 28th International Conference on International Conference on Machine Learning. ICML’11.
Wright, and Recht. 2021. Optimization for Data Analysis.
Xu. 2011. Towards Optimal One Pass Large Scale Learning with Averaged Stochastic Gradient Descent.” arXiv:1107.2490 [Cs].
Zhang, Jian, and Mitliagkas. 2018. YellowFin and the Art of Momentum Tuning.”
Zhang, Xiao, Wang, and Gu. 2017. Stochastic Variance-Reduced Gradient Descent for Low-Rank Matrix Recovery from Linear Measurements.” arXiv:1701.00481 [Stat].
Zinkevich, Weimer, Li, et al. 2010. Parallelized Stochastic Gradient Descent.” In Advances in Neural Information Processing Systems 23.