Stochastic Newton-type optimisation, unlike its deterministic counterpart, uses noisy (possibly approximate) second-order information to find the argument that minimises

\[ x^*=\operatorname{argmin}_{x} f(x) \]

for some objective function \(f:\mathbb{R}^n\to\mathbb{R}\).

## Subsampling

Most of the good tricks here are set up for ML-style training losses, where the bottleneck is summing a large number of per-example loss terms.

LiSSA attempts to make 2nd order gradient descent methods scale to large parameter sets (Agarwal, Bullins, and Hazan 2016):

a linear time stochastic second order algorithm that achieves linear convergence for typical problems in machine learning while still maintaining run-times theoretically comparable to state-of-the-art first order algorithms. This relies heavily on the special structure of the optimization problem that allows our unbiased hessian estimator to be implemented efficiently, using only vector-vector products.

David McAllester observes:

Since \(H^{t+1}y^t\) can be computed efficiently whenever we can run backpropagation, the conditions under which the LiSSA algorithm can be run are actually much more general than the paper suggests. Backpropagation can be run on essentially any natural loss function.
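McAllester's observation is the heart of the trick: a Hessian-vector product costs roughly one extra backprop pass, and LiSSA estimates the Newton direction \(H^{-1}g\) by truncating the Neumann series \(H^{-1}=\sum_j (I-H)^j\) (valid when the eigenvalues of \(H\) lie in \((0,1)\), which is arranged by rescaling). A minimal NumPy sketch on a synthetic Hessian; the `hvp` function here is an illustrative stand-in for a subsampled backprop pass, not the paper's implementation:

```python
import numpy as np

# Sketch of the LiSSA idea (Agarwal, Bullins & Hazan 2016):
# estimate the Newton direction H^{-1} g using only
# Hessian-vector products, via the truncated Neumann series
#   H^{-1} = sum_j (I - H)^j   (needs eigenvalues of H in (0, 1)).
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
H = M @ M.T
H = H / (np.linalg.norm(H, 2) * 1.1) + 0.05 * np.eye(n)  # spectrum in (0, 1)
g = rng.standard_normal(n)

def hvp(v):
    # Stand-in for a (subsampled) Hessian-vector product;
    # in a neural net this is one extra backprop pass.
    return H @ v

u = g.copy()
for _ in range(500):
    u = g + u - hvp(u)          # u_j = g + (I - H) u_{j-1}

print(np.allclose(u, np.linalg.solve(H, g), atol=1e-4))
```

In the stochastic version each `hvp` call uses an independently sampled minibatch Hessian, which keeps the estimator unbiased at the cost of extra variance.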

(Kovalev, Mishchenko, and Richtárik 2019) uses a decomposition of the objective into a sum of simple functions (the classic SGD setup for neural nets), typical of online optimisation.
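In that finite-sum setting, the simplest member of this family is a subsampled Newton step: draw a minibatch, form the batch gradient and batch Hessian, and solve. A hedged sketch on least squares; this is the generic idea, not the authors' specific algorithm, and the batch size and damping constant are illustrative assumptions:

```python
import numpy as np

# Subsampled Newton step for a finite-sum objective
# f(x) = (1/N) sum_i 0.5 (a_i . x - b_i)^2  (least squares).
rng = np.random.default_rng(1)
N, n = 200, 5
A = rng.standard_normal((N, n))
x_true = rng.standard_normal(n)
b = A @ x_true                       # noiseless targets for the demo

x = np.zeros(n)
damping = 1e-3                       # keeps the subsampled Hessian invertible
for _ in range(50):
    idx = rng.choice(N, size=32, replace=False)
    Ab = A[idx]
    g = Ab.T @ (Ab @ x - b[idx]) / len(idx)         # batch gradient
    H = Ab.T @ Ab / len(idx) + damping * np.eye(n)  # damped batch Hessian
    x = x - np.linalg.solve(H, g)                   # Newton step

print(np.linalg.norm(x - x_true) < 1e-2)
```

On a quadratic each subsampled step contracts the error by roughly the damping-to-curvature ratio, which is why the iterates converge so quickly here.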

What does Francis Bach's finite-sample guarantee research get us in this setting? (Bach and Moulines 2011, 2013)

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ('large n') and each of these is large ('large p'). In this setting, online algorithms such as stochastic gradient descent which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of O(1/n) without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent. (joint work with Nicolas Le Roux, Eric Moulines and Mark Schmidt).
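One concrete instance of the single-pass behaviour described above is least squares, where constant-step SGD with Polyak-Ruppert averaging of the iterates attains the \(O(1/n)\) rate without strong-convexity assumptions. A hedged sketch (the step size and problem dimensions are illustrative choices, not from the papers):

```python
import numpy as np

# Single-pass averaged SGD on least squares: plain SGD with a
# constant step, plus a running average of the iterates
# (Polyak-Ruppert averaging), which is what carries the O(1/n) rate.
rng = np.random.default_rng(3)
N, d = 20000, 5
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)

w = np.zeros(d)
w_bar = np.zeros(d)
step = 0.05                            # constant step, < 1/(2 E||x||^2)
for k in range(N):                     # exactly one pass over the data
    xi, yi = X[k], y[k]
    w -= step * (xi @ w - yi) * xi     # SGD step on 0.5 (x.w - y)^2
    w_bar += (w - w_bar) / (k + 1)     # running average of iterates

print(np.linalg.norm(w_bar - w_true) < 0.05)
```

The raw iterate `w` keeps bouncing in a noise ball of radius set by the constant step; it is the averaged iterate `w_bar` that converges.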

## General case

Rather than observing \(\nabla f, \nabla^2 f\) we observe some random variables \(G(x),H(x)\) with \(\mathbb{E}\,G(x)=\nabla f(x)\) and \(\mathbb{E}\,H(x)=\nabla^2 f(x),\) not necessarily decomposable into a sum.
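A minimal sketch of a scheme for this setting, in the spirit of Ruppert (1985): keep a running average of the noisy Hessian observations and take Newton steps with Robbins-Monro step sizes \(1/k\). All constants and the toy noise model below are illustrative assumptions:

```python
import numpy as np

# Stochastic Newton with unbiased but noisy oracles G(x), H(x):
# average the Hessian observations, precondition the gradient
# estimate with the average, and decay the step size.
rng = np.random.default_rng(2)
n = 4
A = np.diag(np.linspace(1.0, 4.0, n))   # true (hidden) Hessian
x_star = np.ones(n)

def G(x):
    # Unbiased gradient estimate: E[G(x)] = A (x - x_star)
    return A @ (x - x_star) + 0.1 * rng.standard_normal(n)

def H(x):
    # Unbiased Hessian estimate with symmetric noise
    E = 0.1 * rng.standard_normal((n, n))
    return A + (E + E.T) / 2

x = np.zeros(n)
H_bar = np.eye(n)
for k in range(1, 2001):
    H_bar += (H(x) - H_bar) / k          # running mean of Hessian estimates
    x = x - (1.0 / k) * np.linalg.solve(H_bar, G(x))

print(np.linalg.norm(x - x_star) < 0.1)
```

Averaging the curvature estimates rather than inverting each noisy \(H(x)\) directly is what keeps the preconditioner stable; the Robbins-Siegmund supermartingale argument is the classical tool for proving that such iterates converge.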

🏗

For now, see optimisation of experiments, which is an interesting application of this problem.

Agarwal, Naman, Brian Bullins, and Elad Hazan. 2016. “Second Order Stochastic Optimization in Linear Time,” February. http://arxiv.org/abs/1602.03943.

Arnold, Sébastien M. R., and Chunming Wang. 2017. “Accelerating SGD for Distributed Deep-Learning Using Approximated Hessian Matrix.” In. http://arxiv.org/abs/1709.05069.

Ba, Jimmy, Roger Grosse, and James Martens. 2016. “Distributed Second-Order Optimization Using Kronecker-Factored Approximations,” November. https://openreview.net/forum?id=SkkTMpjex.

Bach, Francis, and Eric Moulines. 2011. “Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning.” In *Advances in Neural Information Processing Systems (NIPS)*. Spain. http://hal.archives-ouvertes.fr/hal-00608041.

Bach, Francis R., and Eric Moulines. 2013. “Non-Strongly-Convex Smooth Stochastic Approximation with Convergence Rate O(1/N).” In *arXiv:1306.2119 [Cs, Math, Stat]*, 773–81. https://arxiv.org/abs/1306.2119v1.

Battiti, Roberto. 1992. “First-and Second-Order Methods for Learning: Between Steepest Descent and Newton’s Method.” *Neural Computation* 4 (2): 141–66. https://doi.org/10.1162/neco.1992.4.2.141.

Bordes, Antoine, Léon Bottou, and Patrick Gallinari. 2009. “SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent.” *Journal of Machine Learning Research* 10 (December): 1737–54. http://jmlr.org/papers/volume10/bordes09a/bordes09a.pdf.

Bottou, Léon. 2012. “Stochastic Gradient Descent Tricks.” In *Neural Networks: Tricks of the Trade*, 421–36. Lecture Notes in Computer Science. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35289-8_25.

Byrd, R. H., S. L. Hansen, Jorge Nocedal, and Y. Singer. 2016. “A Stochastic Quasi-Newton Method for Large-Scale Optimization.” *SIAM Journal on Optimization* 26 (2): 1008–31. https://doi.org/10.1137/140954362.

Cho, Minhyung, Chandra Shekhar Dhir, and Jaehyung Lee. 2015. “Hessian-Free Optimization for Learning Deep Multidimensional Recurrent Neural Networks.” In *Advances in Neural Information Processing Systems*. http://arxiv.org/abs/1509.03475.

Dauphin, Yann, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. 2014. “Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization.” In *Advances in Neural Information Processing Systems 27*, 2933–41. Curran Associates, Inc. http://arxiv.org/abs/1406.2572.

Kovalev, Dmitry, Konstantin Mishchenko, and Peter Richtárik. 2019. “Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates,” December. http://arxiv.org/abs/1912.01597.

Lucchi, Aurelien, Brian McWilliams, and Thomas Hofmann. 2015. “A Variance Reduced Stochastic Newton Method,” March. http://arxiv.org/abs/1503.08316.

Martens, James. 2010. “Deep Learning via Hessian-Free Optimization.” In *Proceedings of the 27th International Conference on International Conference on Machine Learning*, 735–42. ICML’10. USA: Omnipress. http://www.cs.utoronto.ca/~jmartens/docs/Deep_HessianFree.pdf.

Martens, James, and Ilya Sutskever. 2011. “Learning Recurrent Neural Networks with Hessian-Free Optimization.” In *Proceedings of the 28th International Conference on International Conference on Machine Learning*, 1033–40. ICML’11. USA: Omnipress. http://dl.acm.org/citation.cfm?id=3104482.3104612.

———. 2012. “Training Deep and Recurrent Networks with Hessian-Free Optimization.” In *Neural Networks: Tricks of the Trade*, 479–535. Lecture Notes in Computer Science. Springer. http://www.cs.toronto.edu/~jmartens/docs/HF_book_chapter.pdf.

Robbins, H., and D. Siegmund. 1971. “A Convergence Theorem for Non Negative Almost Supermartingales and Some Applications.” In *Optimizing Methods in Statistics*, edited by Jagdish S. Rustagi, 233–57. Academic Press. https://doi.org/10.1016/B978-0-12-604550-5.50015-8.

Ruppert, David. 1985. “A Newton-Raphson Version of the Multivariate Robbins-Monro Procedure.” *The Annals of Statistics* 13 (1): 236–45. https://doi.org/10.1214/aos/1176346589.

Schraudolph, Nicol N., Jin Yu, and Simon Günter. 2007. “A Stochastic Quasi-Newton Method for Online Convex Optimization.” In *Artificial Intelligence and Statistics*, 436–43. http://proceedings.mlr.press/v2/schraudolph07a.html.