Gradient descent is a classic first-order optimisation method, with many variants, and many things one might wish to understand.

There are only a few things I wish to understand for the moment.

There is a very tidy introduction in Anupam Gupta's notes for 15-850: CMU Advanced Algorithms, Fall 2020, in particular Lectures 18 and 19.

## Coordinate descent

Descend along each coordinate individually.

Small clever hack for certain domains: log gradient descent.
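A minimal sketch of what coordinate descent might look like, assuming a quadratic objective so that each one-dimensional minimisation has a closed form (the function name and example matrix are mine, for illustration only):

```python
import numpy as np

def coordinate_descent(A, b, n_sweeps=100):
    """Minimise f(x) = 0.5 x^T A x - b^T x, for symmetric positive-definite A,
    by exactly minimising along one coordinate at a time (Gauss-Seidel)."""
    x = np.zeros(len(b))
    for _ in range(n_sweeps):
        for i in range(len(b)):
            # Setting df/dx_i = A[i] @ x - b[i] = 0 and solving for x_i:
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = coordinate_descent(A, b)   # converges to the solution of A x = b
```

For this objective the sweep is exactly Gauss-Seidel iteration, which converges for any symmetric positive-definite `A`.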

## Momentum

Polyak momentum (that's the heavy-ball one, right?), Nesterov momentum.

How and when does it work? And how well? Moritz Hardt's The zen of gradient descent explains it through Chebyshev polynomials. Cheng-Soon Ong recommends d'Aspremont, Scieur, and Taylor (2021) as an overview. Gabriel Goh's Why Momentum Really Works (Goh 2017) is an incredible illustrated guide.

Sébastien Bubeck explains it from a different angle in Revisiting Nesterov's Acceleration, which expands upon the rather magical introduction given in his lectures. Wibisono et al. explain it in terms of variational approximation; see also Accelerated gradient descent 1 and 2.

Trung Vu's Convergence of Heavy-Ball Method and Nesterov's Accelerated Gradient on Quadratic Optimization differentiates Nesterov momentum from heavy-ball momentum.
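The distinction is easy to see in code. A toy (untuned, illustrative) sketch of the two update rules: heavy ball adds a multiple of the previous displacement, while Nesterov evaluates the gradient at a look-ahead point.

```python
def heavy_ball(grad, x0, lr=0.1, beta=0.9, n=100):
    """Polyak heavy-ball: gradient at the current point,
    plus beta times the previous displacement."""
    x, x_prev = x0, x0
    for _ in range(n):
        x, x_prev = x - lr * grad(x) + beta * (x - x_prev), x
    return x

def nesterov(grad, x0, lr=0.1, beta=0.9, n=100):
    """Nesterov momentum: same displacement term, but the
    gradient is evaluated at the look-ahead point."""
    x, x_prev = x0, x0
    for _ in range(n):
        look = x + beta * (x - x_prev)
        x, x_prev = look - lr * grad(look), x
    return x

# Both drive f(x) = x^2 (gradient 2x) towards its minimum at 0,
# overshooting and oscillating on the way.
quad_grad = lambda x: 2 * x
```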

## Continuous approximations of iterations

Recent papers (Wibisono and Wilson 2015; Wibisono, Wilson, and Jordan 2016) argue that the discrete time steps can be viewed as a discrete approximation to a continuous-time ODE which approaches the optimum (which in itself is trivial), but moreover that many algorithms fit into the same families of ODEs, and that these ODEs explain Nesterov acceleration and generate new, improved optimisation methods (which is not trivial).
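For instance — a standard result from that literature (due to Su, Boyd, and Candès), stated here from memory — plain gradient descent discretises the gradient flow $\dot{X}(t) = -\nabla f(X(t))$, whereas Nesterov's accelerated gradient corresponds to a second-order ODE with a vanishing friction term:

```latex
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f(X(t)) = 0 .
```

The $3/t$ damping coefficient is what produces the $O(1/t^2)$ convergence rate matching Nesterov's discrete $O(1/k^2)$.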


## Online versus stochastic

Technically, "online" optimisation in, say, bandit/RL problems might imply that you have to "minimise regret online", which has a slightly different meaning: it involves, e.g., seeing each training example only as it arrives along some notional arrow of time, yet wishing to make the "best" decision at the next time, and possibly choosing your next experiment so as to trade off exploration against exploitation, etc.

In SGD you can see your data as often as you want and in whatever order, but you only look at a little of it at a time. Usually the data is given, and your predictions make no difference to what information is available to you.
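That distinction in miniature — an illustrative SGD sketch on synthetic least-squares data (all numbers and names here are made up for the example): the data is revisited across epochs, in a fresh random order each time, but each update looks at only a single example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 fixed, fully available examples
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)   # noisy targets

w = np.zeros(3)
lr = 0.05
for epoch in range(50):
    for i in rng.permutation(len(y)):          # any order we like, as often as we like
        g = (X[i] @ w - y[i]) * X[i]           # gradient of 0.5 * (x_i . w - y_i)^2
        w -= lr * g
# w is now close to w_true
```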

Some of the same technology pops up in each of these notions of online optimisation, but I am really thinking about SGD here.

There are many more permutations and variations used in practice.

## Conditional Gradient

a.k.a. the Frank–Wolfe algorithm. I don't know much about this.
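That said, the basic recipe is simple enough to sketch: each step solves a linear minimisation over the constraint set (instead of projecting onto it) and moves towards the answer. A toy version over the probability simplex, where the linear oracle just picks a vertex — the objective and names here are my own illustration, not from any particular source:

```python
import numpy as np

def frank_wolfe(grad, dim, n_iter=2000):
    """Minimise a smooth convex f over the probability simplex."""
    x = np.ones(dim) / dim              # start at the barycentre
    for k in range(n_iter):
        g = grad(x)
        s = np.zeros(dim)
        s[np.argmin(g)] = 1.0           # linear minimisation oracle: best vertex
        gamma = 2.0 / (k + 2.0)         # the classic diminishing step size
        x = (1 - gamma) * x + gamma * s # convex combination stays in the simplex
    return x

# Example: f(x) = 0.5 ||x - p||^2 with p already on the simplex,
# so the minimiser is p itself.
p = np.array([0.2, 0.5, 0.3])
x = frank_wolfe(lambda v: v - p, 3)
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained without any projection, which is the method's selling point on domains where linear minimisation is cheap.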

## Mirror descent

See mirror descent.

## References

*Journal of Machine Learning Research* 15 (1): 1111–33.

*Advances in Neural Information Processing Systems 29*, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 1606–14. Curran Associates, Inc.

*arXiv:1901.02871 [Cs, Math, Stat]*, January.

*Mathematical Programming Computation* 11 (1): 1–36.

*arXiv:2101.09545 [Cs, Math]*, January.

*Operations Research Letters* 31 (3): 167–75.

*SIAM Journal on Imaging Sciences* 2 (1): 183–202.

*arXiv:1802.03653 [Stat]*, February.

*arXiv:1607.01981 [Cs, Stat]*, July.

*Convex Optimization: Algorithms and Complexity*. Vol. 8. Foundations and Trends in Machine Learning. Now Publishers.

*The Five Miracles of Mirror Descent*.

*Mathematical Programming* 134 (1): 71–99.

*Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics*, 192–204.

*Advances in Neural Information Processing Systems 27*.

*Acta Numerica* 7 (January): 51–150.

*arXiv:2012.00152 [Cs, Stat]*, November.

*Distill* 2 (4): e6.

*IEEE Transactions on Signal Processing* 62 (4): 868–82.

*Advances in Neural Information Processing Systems 21*, edited by D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, 905–12. Curran Associates, Inc.

*arXiv:1710.07406 [Cs, Math, Stat]*, October.

*arXiv:1602.04915 [Cs, Math, Stat]*, March.

*arXiv:1703.10622 [Cs, Stat]*, March.

*Journal of Machine Learning Research*, April.

*SIAM Journal on Optimization* 22 (2): 341–62.

*Mathematical Programming* 140 (1): 125–61.

*Numerical Optimization*. 2nd ed. Springer Series in Operations Research and Financial Engineering. New York: Springer-Verlag.

*arXiv:2101.04968 [Cs, Math, Stat]*, June.

*arXiv:1609.04747 [Cs]*, September.

*arXiv:1412.6615 [Cs, Stat]*, December.

*Annual Review of Statistics and Its Application* 1 (1): 233–53.

*arXiv:1509.03616 [Math]*, September.

*Proceedings of the National Academy of Sciences* 113 (47): E7351–58.

*Optimization for Data Analysis*. New York: Cambridge University Press.

*Proceedings of the Twentieth International Conference on International Conference on Machine Learning*, 928–35. ICML'03. Washington, DC, USA: AAAI Press.
