## a.k.a. SGD, as seen in deep learning

Stochastic gradient optimization uses noisy (possibly approximate) first-order gradient information to find the argument which minimises

$$\mathbf{x}^*=\operatorname{arg\,min}_{\mathbf{x}} f(\mathbf{x})$$

for some objective function $$f:\mathbb{R}^n\to\mathbb{R}$$.

That this works with little fuss in very high dimensions is a major pillar of deep learning.
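A minimal sketch of the basic loop on a toy quadratic objective (the objective, noise model, and step-size schedule here are illustrative assumptions, not anyone's reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective f(x) = 0.5 * ||x - x_star||^2, so the true gradient is (x - x_star).
# SGD only ever sees a noisy estimate of it.
x_star = np.array([2.0, -1.0])

def noisy_grad(x, noise_scale=0.1):
    return (x - x_star) + noise_scale * rng.standard_normal(x.shape)

x = np.zeros(2)
for t in range(1, 2001):
    eta = 1.0 / t                 # classic decaying step size
    x = x - eta * noisy_grad(x)   # step against the noisy gradient
```

The $1/t$ schedule is the classic Robbins–Monro choice; practical deep-learning schedules are gentler, but the skeleton (step against a noisy gradient estimate, shrink the step over time) is the same.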

## Basic

The original version, in terms of root finding, is due to Robbins and Monro (1951); Robbins and Siegmund (1971) later generalised the analysis, using martingale arguments to prove convergence. Lai (2003) supplies some historical context. That article was written before the current craze for SGD in deep learning; after 2013 or so the problem is rather that there is so much information on the method that the challenge becomes sifting the AI hype from the useful.
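The classical conditions on the step sizes $\eta_t$ from that literature recur in nearly every convergence proof, so they are worth stating:

$$\sum_{t=1}^{\infty}\eta_t=\infty \qquad\text{and}\qquad \sum_{t=1}^{\infty}\eta_t^2<\infty.$$

The first condition lets the iterates travel arbitrarily far; the second ensures the injected gradient noise is eventually averaged away. The canonical choice is $\eta_t = c/t$.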

I recommend Francis Bach’s Sum of geometric series trick as an introduction to proving advanced things about SGD using elementary tools.

Francesco Orabona, on how to prove SGD converges:

> to balance the universe of first-order methods, I decided to show how to easily prove the convergence of the iterates in SGD, even in unbounded domains.

## Variance-reduced

🏗

Zeyuan Allen-Zhu: Faster Than SGD 1: Variance Reduction:

> SGD is well-known for large-scale optimization. In my mind, there are two (and only two) fundamental improvements since the original introduction of SGD: (1) variance reduction, and (2) acceleration. In this post I’d love to conduct a survey regarding (1) …
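To make the idea concrete, here is a hedged sketch of SVRG-style variance reduction on a toy least-squares problem (the problem setup, step size, and epoch count are all illustrative assumptions): the trick is to recentre each stochastic gradient around a periodically refreshed full gradient, so the estimator stays unbiased while its variance shrinks as the iterates approach the snapshot.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy least-squares objective: f(x) = (1/n) * sum_i 0.5 * (a_i . x - b_i)^2
n, d = 200, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def grad_i(x, i):                 # gradient of the i-th summand
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / n

x = np.zeros(d)
eta = 0.01
for epoch in range(50):
    snapshot, mu = x.copy(), full_grad(x)   # refresh the full gradient
    for _ in range(n):
        i = rng.integers(n)
        # Unbiased estimate whose variance vanishes as x approaches the optimum:
        g = grad_i(x, i) - grad_i(snapshot, i) + mu
        x -= eta * g
```

Note that at the optimum, with the snapshot also at the optimum, the estimator is exactly zero, so the method converges geometrically with a fixed step size, whereas plain SGD with a fixed step stalls at a noise floor.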

## Normalized

> You may remember our previous blog post showing that it is possible to do state-of-the-art deep learning with learning rate that increases exponentially during training. It was meant to be a dramatic illustration that what we learned in optimization classes and books isn’t always a good fit for modern deep learning, specifically, normalized nets, which is our term for nets that use any one of popular normalization schemes, e.g. BatchNorm (BN), GroupNorm (GN), WeightNorm (WN). Today’s post (based upon our paper with Kaifeng Lyu at NeurIPS 2020) identifies other surprising incompatibilities between normalized nets and traditional analyses.
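A toy illustration of why normalized nets defy classical step-size analyses (this is a hedged sketch with a made-up one-layer "network", not the paper's setup): a BatchNorm-style operation makes the output exactly invariant to the scale of the incoming weights, so the gradient with respect to those weights shrinks as their norm grows, and the "effective" learning rate becomes entangled with the weight norm.

```python
import numpy as np

rng = np.random.default_rng(2)

def normalized_layer(X, w):
    # A BatchNorm-like operation: standardize pre-activations over the batch.
    z = X @ w
    return (z - z.mean()) / z.std()

X = rng.standard_normal((64, 10))
w = rng.standard_normal(10)

out1 = normalized_layer(X, w)
out2 = normalized_layer(X, 10.0 * w)    # rescaling w changes nothing
print(np.allclose(out1, out2))          # True
```

Since rescaling $w$ by any positive constant rescales the pre-activations, their mean, and their standard deviation by the same factor, the normalized output is unchanged.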

See SGMCMC.

## Sundry Hacks

Yellowfin, an automatic SGD momentum tuner.
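Yellowfin's job is to tune the two knobs of momentum SGD automatically; for reference, the heavy-ball update it targets looks roughly like this (the toy objective and hyperparameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
x_star = np.array([1.0, -3.0])

def noisy_grad(x):
    # Noisy gradient of the quadratic f(x) = 0.5 * ||x - x_star||^2
    return (x - x_star) + 0.05 * rng.standard_normal(2)

x = np.zeros(2)
v = np.zeros(2)          # velocity / momentum buffer
eta, beta = 0.05, 0.9    # learning rate and momentum: the two knobs a tuner adjusts

for _ in range(1000):
    v = beta * v - eta * noisy_grad(x)   # accumulate a decaying sum of gradients
    x = x + v
```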

Mini-batch and stochastic methods minimise a loss when you have a lot of data, or a lot of parameters, and using it all at once is impractical, or when you want to improve your solution iteratively as data arrives, and when you have access to a gradient for your loss, ideally one calculated automatically. It is not at all obvious that this should work better than collating all your data and optimising offline, except that much of modern machine learning demonstrates that it does.

Sometimes this apparently stupid trick might even be fast for small-dimensional problems, so you may as well try it.

Technically, “online” optimisation in bandit/RL problems might imply that you must “minimise regret online”, which has a slightly different meaning: it involves seeing each training example only as it arrives along some notional arrow of time, yet wishing to make the “best” decision at the next time, and possibly choosing your next experiment to trade off exploration against exploitation, etc.

In SGD you can see your data as often as you want and in whatever order, but you only look at a bit of it at a time. Usually the data is given, and your predictions make no difference to what information is available to you.
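Concretely, "see your data as often as you want, a bit at a time" usually means reshuffling the dataset into minibatches each epoch; a minimal sketch (the dataset is a stand-in and the actual parameter update is elided):

```python
import numpy as np

rng = np.random.default_rng(4)
n, batch_size = 1000, 32
X = rng.standard_normal((n, 3))          # stand-in dataset

batches = 0
for epoch in range(2):
    order = rng.permutation(n)           # fresh shuffle every epoch
    for start in range(0, n, batch_size):
        batch = X[order[start:start + batch_size]]
        # ... compute the minibatch gradient on `batch` and update parameters ...
        batches += 1
```

This is exactly the access pattern that distinguishes SGD from regret-minimising online learning: the same examples reappear, in an order we control.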

Some of the same technology pops up in each of these notions of online optimisation, but I am mostly thinking about SGD here.

There are many more permutations and variations used in practice.

## References

Ahn, Sungjin, Anoop Korattikara, and Max Welling. 2012. In Proceedings of the 29th International Coference on International Conference on Machine Learning, 1771–78. ICML’12. Madison, WI, USA: Omnipress.
Alexos, Antonios, Alex J. Boyd, and Stephan Mandt. 2022. In Proceedings of the 39th International Conference on Machine Learning, 414–34. PMLR.
Arya, Gaurav, Moritz Schauer, Frank Schäfer, and Christopher Vincent Rackauckas. 2022. In.
Bach, Francis R., and Eric Moulines. 2013. In arXiv:1306.2119 [Cs, Math, Stat], 773–81.
Bach, Francis, and Eric Moulines. 2011. In Advances in Neural Information Processing Systems (NIPS), –. Spain.
Benaïm, Michel. 1999. In Séminaire de Probabilités de Strasbourg, 33:1–68. Lecture Notes in Math. Berlin: Springer, Berlin.
Bensoussan, Alain, Yiqun Li, Dinh Phan Cao Nguyen, Minh-Binh Tran, Sheung Chi Phillip Yam, and Xiang Zhou. 2020. arXiv:2006.05604 [Cs, Math, Stat], June.
Botev, Zdravko I., and Chris J. Lloyd. 2015. Electronic Journal of Statistics 9 (2): 2058–75.
Bottou, Léon. 1991. In Proceedings of Neuro-Nîmes 91. Nimes, France: EC2.
———. 1998. In Online Learning and Neural Networks, edited by David Saad, 17:142. Cambridge, UK: Cambridge University Press.
———. 2010. In Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT’2010), 177–86. Paris, France: Springer.
Bottou, Léon, and Olivier Bousquet. 2008. In Advances in Neural Information Processing Systems, edited by J.C. Platt, D. Koller, Y. Singer, and S. Roweis, 20:161–68. NIPS Foundation (http://books.nips.cc).
Bottou, Léon, Frank E. Curtis, and Jorge Nocedal. 2016. arXiv:1606.04838 [Cs, Math, Stat], June.
Bottou, Léon, and Yann LeCun. 2004. In Advances in Neural Information Processing Systems 16, edited by Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf. Cambridge, MA: MIT Press.
Bubeck, Sébastien. 2015. Convex Optimization: Algorithms and Complexity. Vol. 8. Foundations and Trends in Machine Learning. Now Publishers.
Cevher, Volkan, Stephen Becker, and Mark Schmidt. 2014. IEEE Signal Processing Magazine 31 (5): 32–43.
Chen, Tianqi, Emily Fox, and Carlos Guestrin. 2014. In Proceedings of the 31st International Conference on Machine Learning, 1683–91. Beijing, China: PMLR.
Chen, Xiaojun. 2012. Mathematical Programming 134 (1): 71–99.
Chen, Zaiwei, Shancong Mou, and Siva Theja Maguluri. 2021. arXiv.
Di Giovanni, Francesco, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, and Michael M. Bronstein. 2022. arXiv.
Domingos, Pedro. 2020. arXiv:2012.00152 [Cs, Stat], November.
Duchi, John, Elad Hazan, and Yoram Singer. 2011. Journal of Machine Learning Research 12 (Jul): 2121–59.
Friedlander, Michael P., and Mark Schmidt. 2012. SIAM Journal on Scientific Computing 34 (3): A1380–1405.
Ghadimi, Saeed, and Guanghui Lan. 2013a. SIAM Journal on Optimization 23 (4): 2341–68.
———. 2013b. arXiv:1310.3787 [Math], October.
Goh, Gabriel. 2017. Distill 2 (4): e6.
Hazan, Elad, Kfir Levy, and Shai Shalev-Shwartz. 2015. In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 1594–1602. Curran Associates, Inc.
Heyde, C. C. 1974. Stochastic Processes and Their Applications 2 (4): 359–70.
Hu, Chonghai, Weike Pan, and James T. Kwok. 2009. In Advances in Neural Information Processing Systems, 781–89. Curran Associates, Inc.
Jakovetic, D., J.M. Freitas Xavier, and J.M.F. Moura. 2014. IEEE Transactions on Signal Processing 62 (4): 868–82.
Kingma, Diederik, and Jimmy Ba. 2015. Proceeding of ICLR.
Lai, Tze Leung. 2003. The Annals of Statistics 31 (2): 391–406.
Lee, Jason D., Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. 2017. arXiv:1710.07406 [Cs, Math, Stat], October.
Lee, Jason D., Max Simchowitz, Michael I. Jordan, and Benjamin Recht. 2016. arXiv:1602.04915 [Cs, Math, Stat], March.
Liu, Qiang, and Dilin Wang. 2019. In Advances In Neural Information Processing Systems.
Ljung, Lennart, Georg Pflug, and Harro Walk. 1992. Stochastic Approximation and Optimization of Random Systems. Basel: Birkhäuser.
Ma, Siyuan, and Mikhail Belkin. 2017. arXiv:1703.10622 [Cs, Stat], March.
Maclaurin, Dougal, David Duvenaud, and Ryan P. Adams. 2015. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, 1070–77. arXiv.
Mairal, Julien. 2013. In Advances in Neural Information Processing Systems, 2283–91.
Mandt, Stephan, Matthew D. Hoffman, and David M. Blei. 2017. JMLR, April.
McMahan, H. Brendan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, et al. 2013. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1222–30. KDD ’13. New York, NY, USA: ACM.
Mitliagkas, Ioannis, Ce Zhang, Stefan Hadjis, and Christopher Ré. 2016. arXiv:1605.09774 [Cs, Math, Stat], May.
Neu, Gergely, Gintare Karolina Dziugaite, Mahdi Haghifam, and Daniel M. Roy. 2021. arXiv:2102.00931 [Cs, Stat], August.
Nguyen, Lam M., Jie Liu, Katya Scheinberg, and Martin Takáč. 2017. arXiv:1705.07261 [Cs, Math, Stat], May.
Patel, Vivak. 2017. arXiv:1702.00317 [Cs, Math, Stat], February.
Polyak, B. T., and A. B. Juditsky. 1992. SIAM Journal on Control and Optimization 30 (4): 838–55.
Reddi, Sashank J., Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. 2016. In PMLR, 1603:314–23.
Robbins, Herbert, and Sutton Monro. 1951. The Annals of Mathematical Statistics 22 (3): 400–407.
Robbins, H., and D. Siegmund. 1971. In Optimizing Methods in Statistics, edited by Jagdish S. Rustagi, 233–57. Academic Press.
Ruder, Sebastian. 2016. arXiv:1609.04747 [Cs], September.
Sagun, Levent, V. Ugur Guney, Gerard Ben Arous, and Yann LeCun. 2014. arXiv:1412.6615 [Cs, Stat], December.
Salimans, Tim, and Diederik P Kingma. 2016. In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 901–1. Curran Associates, Inc.
Shalev-Shwartz, Shai, and Ambuj Tewari. 2011. Journal of Machine Learning Research 12 (July): 1865–92.
Şimşekli, Umut, Ozan Sener, George Deligiannidis, and Murat A. Erdogdu. 2020. CoRR abs/2006.09313.
Smith, Samuel L., Benoit Dherin, David Barrett, and Soham De. 2020. In.
Spall, J. C. 2000. IEEE Transactions on Automatic Control 45 (10): 1839–53.
Vishwanathan, S.V. N., Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. 2006. “Accelerated Training of Conditional Random Fields with Stochastic Gradient Methods.” In Proceedings of the 23rd International Conference on Machine Learning.
Welling, Max, and Yee Whye Teh. 2011. In Proceedings of the 28th International Conference on International Conference on Machine Learning, 681–88. ICML’11. Madison, WI, USA: Omnipress.
Wright, Stephen J., and Benjamin Recht. 2021. Optimization for Data Analysis. New York: Cambridge University Press.
Xu, Wei. 2011. arXiv:1107.2490 [Cs], July.
Zhang, Xiao, Lingxiao Wang, and Quanquan Gu. 2017. arXiv:1701.00481 [Stat], January.
Zinkevich, Martin, Markus Weimer, Lihong Li, and Alex J. Smola. 2010. In Advances in Neural Information Processing Systems 23, edited by J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, 2595–2603. Curran Associates, Inc.
