Gradient descent, Newton-like, stochastic

January 23, 2020 — December 9, 2021

functional analysis
neural nets
optimization
SDEs
stochastic processes

\[\renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\rv}[1]{\mathsf{#1}} \renewcommand{\vrv}[1]{\vv{\rv{#1}}} \renewcommand{\gvn}{\mid} \renewcommand{\Ex}{\mathbb{E}} \renewcommand{\Pr}{\mathbb{P}}\]


Stochastic Newton-type optimisation, unlike deterministic Newton optimisation, uses noisy (possibly approximate) second-order information to find the minimiser \[ x^*=\operatorname*{arg\,min}_{x} f(x) \] of some objective function \(f:\mathbb{R}^n\to\mathbb{R}\).
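For reference, the deterministic Newton update is what you get by minimising the local quadratic model of \(f\) at the current iterate (assuming the Hessian is positive definite),
\[
x_{t+1}=\operatorname*{arg\,min}_{x}\; f(x_t)+\nabla f(x_t)^{\top}(x-x_t)+\tfrac{1}{2}(x-x_t)^{\top}\nabla^{2} f(x_t)(x-x_t)
= x_{t}-\left[\nabla^{2} f(x_{t})\right]^{-1} \nabla f(x_{t}),
\]
and the stochastic variants below replace \(\nabla f\) and \(\nabla^{2} f\) with noisy estimates.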

1 Subsampling data

Most of the good tricks here are set up for ML-style training losses, where the bottleneck is summing a large number of per-datum loss terms.

LiSSA attempts to make second-order gradient descent methods scale to large parameter sets (Agarwal, Bullins, and Hazan 2016):

a linear time stochastic second order algorithm that achieves linear convergence for typical problems in machine learning while still maintaining run-times theoretically comparable to state-of-the-art first order algorithms. This relies heavily on the special structure of the optimization problem that allows our unbiased hessian estimator to be implemented efficiently, using only vector-vector products.

David McAllester observes:

Since \(H^{t+1}y^t\) can be computed efficiently whenever we can run backpropagation, the conditions under which the LiSSA algorithm can be run are actually much more general than the paper suggests. Backpropagation can be run on essentially any natural loss function.
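To make that concrete, here is a minimal numpy sketch of the LiSSA idea (not the authors' code): estimate the Newton step \(H^{-1}g\) by a truncated Neumann series \(u_j = g + (I-\hat{H}_j)u_{j-1}\), where each \(\hat{H}_j\) is the Hessian of a freshly sampled loss term and enters only through Hessian-vector products. I use a toy ridge-regression loss where the per-sample Hessian-vector product is available in closed form; in a neural net that line is where a backprop-based Hessian-vector product would go. All names here are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: f(w) = (1/n) * sum_i [0.5*(x_i @ w - y_i)**2 + 0.5*lam*||w||^2],
# scaled so each per-sample Hessian has spectral norm near 1, as the
# Neumann-series estimator requires.
n, d, lam = 1000, 20, 0.1
X = rng.normal(size=(n, d)) / np.sqrt(d)
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def full_gradient(w):
    return X.T @ (X @ w - y) / n + lam * w

def sampled_hvp(w, v, i):
    # Hessian-vector product of the i-th loss term: (x_i x_i^T + lam*I) v.
    return X[i] * (X[i] @ v) + lam * v

def lissa_step(w, depth=100, reps=10):
    # Estimate H^{-1} g via u_j = g + (I - H_hat_j) u_{j-1}, drawing a fresh
    # sampled Hessian at each inner step, then average independent repetitions
    # to reduce variance.
    g = full_gradient(w)
    estimates = []
    for _ in range(reps):
        u = g.copy()
        for _ in range(depth):
            i = rng.integers(n)
            u = g + u - sampled_hvp(w, u, i)
        estimates.append(u)
    return np.mean(estimates, axis=0)

w = np.zeros(d)
for t in range(20):
    w = w - lissa_step(w)

w_exact = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print("distance to exact ridge solution:", np.linalg.norm(w - w_exact))
```

Because the estimator is noisy, the iterate hovers near the exact ridge solution rather than hitting it exactly; more repetitions or averaging would tighten that up.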

Kovalev, Mishchenko, and Richtárik (2019) use a decomposition of the objective into a sum of simple functions (the classic SGD setup for neural nets, typical of online optimisation).

What do (F. Bach and Moulines 2011; F. R. Bach and Moulines 2013) get us?

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations (‘large n’) and each of these is large (‘large p’). In this setting, online algorithms such as stochastic gradient descent which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of O(1/n) without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent. (joint work with Nicolas Le Roux, Eric Moulines and Mark Schmidt).

2 General case

Rather than observing \(\nabla f\) and \(\nabla^2 f\) directly, we observe random variables \(G(x)\) and \(H(x)\) with \(\bb{E}G(x)=\nabla f(x)\) and \(\bb{E}H(x)=\nabla^2 f(x)\), not necessarily decomposable into a sum over data points.
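A schematic numpy sketch of this setting: take damped Newton steps using whatever noisy oracles we have. The oracle names, the Levenberg–Marquardt-style damping, and the step size are my additions, not prescribed by any particular paper.

```python
import numpy as np

def stochastic_newton(grad_oracle, hess_oracle, x0, steps=100, damping=1e-2, step_size=0.5):
    """Damped stochastic Newton iteration.

    grad_oracle(x) returns a noisy unbiased gradient estimate G(x);
    hess_oracle(x) returns a noisy Hessian estimate H(x).
    Damping keeps the linear solve well-posed when H(x) is noisy or indefinite,
    and a step size below 1 tempers the noise in each step."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(steps):
        G = grad_oracle(x)
        H = hess_oracle(x)
        x = x - step_size * np.linalg.solve(H + damping * np.eye(d), G)
    return x

# Example with noisy oracles for the quadratic f(x) = 0.5 * x @ A @ x - b @ x.
rng = np.random.default_rng(0)
A = np.diag(np.linspace(0.5, 3.0, 4))
b = np.ones(4)
x_hat = stochastic_newton(
    lambda x: A @ x - b + 0.1 * rng.normal(size=4),   # G(x): unbiased, noisy
    lambda x: A + 0.1 * np.diag(rng.normal(size=4)),  # H(x): unbiased, noisy
    x0=np.zeros(4),
)
print(x_hat, np.linalg.solve(A, b))
```

With a constant step size the iterate hovers near the optimum rather than converging exactly; decaying steps or iterate averaging would be the usual fix.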

🏗

3 Online Newton’s method

Bookmarked: I am sitting in Iman Shames’ seminar on some recent papers (Gravell, Shames, and Summers 2021; Lesage-Landry, Taylor, and Shames 2021; Pavlov, Shames, and Manzie 2020):

In this talk we revisit one of the most important workhorses of numerical optimisation: the Newton’s method. We will (hopefully) provide some insight into the role it still plays in modern applications of control and learning theory. We first look into its regret analysis when applied to online nonconvex optimisation problems, then we shift our focus to how it and its very close relative, the midpoint Newton’s method, play leading roles in learning dynamical systems, and conclude with reviewing how it acts as a hero in disguise in solving differential dynamic programming problems.

Looks worthwhile.

Online Newton’s Method (ONM): \(\quad x_{t+1}=x_{t}-H_{t}^{-1}\left(x_{t}\right) \nabla f_{t}\left(x_{t}\right)\)

Let \(x_{t+1}^{*}=x_{t}^{*}+v_{t}\) with \(\max _{t}\left\|v_{t}\right\| \leq \bar{v}\), and let \(V_{T}=\sum_{t=1}^{T}\left\|x_{t+1}^{*}-x_{t}^{*}\right\|=\sum_{t=1}^{T}\left\|v_{t}\right\|\) be the path length of the sequence of optima.

Some assumptions:
\[
\begin{aligned}
\exists h_{t}>0: &\quad\left\|H_{t}^{-1}\left(x_{t}^{*}\right)\right\| \leq \frac{1}{h_{t}} \\
\exists \beta_{t}, L_{t}>0: &\quad\left\|x-x_{t}^{*}\right\| \leq \beta_{t} \Longrightarrow\left\|H_{t}(x)-H_{t}\left(x_{t}^{*}\right)\right\| \leq L_{t}\left\|x-x_{t}^{*}\right\| \\
&\quad\left\|x_{t}-x_{t}^{*}\right\| \leq \gamma_{t}:=\min \left[\beta_{t}, \frac{2 h_{t}}{3 L_{t}}\right]
\end{aligned}
\]

Lemma:
\[
\left((\mathrm{ONM}) \wedge\left(\bar{v} \leq \gamma-\frac{3 L}{2 h} \gamma^{2}\right)\right) \Longrightarrow\left(\left\|x_{t+1}-x_{t+1}^{*}\right\|<\gamma\right)
\]

Another assumption: \(\exists \ell>0\) such that \(\left\|x-x_{t}^{*}\right\| \leq \gamma \Longrightarrow\left|f_{t}(x)-f_{t}\left(x_{t}^{*}\right)\right| \leq \ell\left\|x-x_{t}^{*}\right\|\) for all \(t=1,2, \ldots, T\).

Theorem: the dynamic regret of ONM is bounded above,
\[
\operatorname{Reg}_{T}^{D, \mathrm{ONM}} \leq \frac{\ell}{1-\frac{3 L}{2 h} \gamma}\left(V_{T}+\delta\right),
\]
where \(\delta=\frac{3 L}{2 h}\left(\left\|x_{0}-x_{0}^{*}\right\|^{2}-\left\|x_{T}-x_{T}^{*}\right\|^{2}\right).\)

It is the same as the regret for strongly-convex functions.
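As a sanity check on the flavour of this result, here is a tiny numpy simulation of ONM tracking a drifting optimum for the quadratic family \(f_t(x)=\tfrac12 (x-x_t^*)^\top A (x-x_t^*)\); the dynamic regret should scale with the path length \(V_T\). The toy setup is my own, not taken from the papers above.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 5, 500
A = np.diag(np.linspace(0.5, 2.0, d))         # fixed positive-definite curvature

# Slowly drifting optima x_t^*; their path length V_T drives the regret bound.
optima = np.cumsum(0.05 * rng.normal(size=(T, d)), axis=0)

def grad_t(x, t):
    return A @ (x - optima[t])

def hess_t(x, t):
    return A                                   # constant Hessian for this toy family

x = np.zeros(d)
regret = 0.0
for t in range(T):
    diff = x - optima[t]
    regret += 0.5 * diff @ A @ diff            # f_t(x_t) - f_t(x_t^*)
    x = x - np.linalg.solve(hess_t(x, t), grad_t(x, t))   # ONM update

V_T = np.sum(np.linalg.norm(np.diff(optima, axis=0), axis=1))
print(f"dynamic regret {regret:.2f}, path length V_T {V_T:.2f}")
```

In this toy case each ONM step lands exactly on the previous optimum, so the accumulated regret stays a small multiple of the drift, consistent with the bound.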

The take-home message seems to be that Newton's methods are efficient for sufficiently well-behaved online control problems, rather as you would expect. It is not obvious to me, though, how to get a decent quasi-Newton version of this.

4 Subsampling parameters

Hu et al. (2022):

The success of deep learning comes at a tremendous computational and energy cost, and the scalability of training massively overparametrized neural networks is becoming a real barrier to the progress of AI. […] SGD has a prohibitive convergence rate in non-convex settings, both in theory and practice.

To mitigate this cost, recent works have proposed to employ alternative (Newton-type) training methods with much faster convergence rate, albeit with higher cost-per-iteration. For a typical neural network with \(m=\operatorname{poly}(n)\) parameters and input batch of \(n\) datapoints in \(\mathbb{R}^{d}\), the previous work of [Brand, Peng, Song, and Weinstein, ITCS’2021] requires \(\sim m n d+n^{3}\) time per iteration. In this paper, we present a novel training method that requires only \(m^{1-\alpha} n d+n^{3}\) amortized time in the same DNNs. This method relies on a new and alternative view of neural networks, as a set of binary search trees, where each iteration corresponds to modifying a small subset of the nodes in the tree. We believe this view would have further applications in the design and analysis of DNNs.

5 References

Abt, and Welch. 1998. “Fisher Information and Maximum-Likelihood Estimation of Covariance Parameters in Gaussian Stochastic Processes.” Canadian Journal of Statistics.
Agarwal, Bullins, and Hazan. 2016. “Second Order Stochastic Optimization in Linear Time.” arXiv:1602.03943 [Cs, Stat].
Amari. 1998. “Natural Gradient Works Efficiently in Learning.” Neural Computation.
Amari, Karakida, and Oizumi. 2018. “Fisher Information and Natural Gradient Learning of Random Deep Networks.” arXiv:1808.07172 [Cond-Mat, Stat].
Amari, Park, and Fukumizu. 2000. “Adaptive Method of Realizing Natural Gradient Learning for Multilayer Perceptrons.” Neural Computation.
Arbel, Gretton, Li, et al. 2020. “Kernelized Wasserstein Natural Gradient.”
Arnold, and Wang. 2017. “Accelerating SGD for Distributed Deep-Learning Using Approximated Hessian Matrix.” In arXiv:1709.05069 [Cs].
Bach, Francis, and Moulines. 2011. “Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning.” In Advances in Neural Information Processing Systems (NIPS).
Bach, Francis R., and Moulines. 2013. “Non-Strongly-Convex Smooth Stochastic Approximation with Convergence Rate O(1/n).” In arXiv:1306.2119 [Cs, Math, Stat].
Ba, Grosse, and Martens. 2016. “Distributed Second-Order Optimization Using Kronecker-Factored Approximations.”
Battiti. 1992. “First- and Second-Order Methods for Learning: Between Steepest Descent and Newton’s Method.” Neural Computation.
Bordes, Bottou, and Gallinari. 2009. “SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent.” Journal of Machine Learning Research.
Botev, Ritter, and Barber. 2017. “Practical Gauss-Newton Optimisation for Deep Learning.” In Proceedings of the 34th International Conference on Machine Learning.
Bottou. 2012. “Stochastic Gradient Descent Tricks.” In Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science.
Byrd, Hansen, Nocedal, et al. 2016. “A Stochastic Quasi-Newton Method for Large-Scale Optimization.” SIAM Journal on Optimization.
Cho, Dhir, and Lee. 2015. “Hessian-Free Optimization for Learning Deep Multidimensional Recurrent Neural Networks.” In Advances In Neural Information Processing Systems.
Dangel, Kunstner, and Hennig. 2019. “BackPACK: Packing More into Backprop.” In International Conference on Learning Representations.
Dauphin, Pascanu, Gulcehre, et al. 2014. “Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization.” In Advances in Neural Information Processing Systems 27.
Detommaso, Cui, Spantini, et al. 2018. “A Stein Variational Newton Method.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS’18.
Efron, and Hinkley. 1978. “Assessing the Accuracy of the Maximum Likelihood Estimator: Observed Versus Expected Fisher Information.” Biometrika.
Gravell, Shames, and Summers. 2021. “Approximate Midpoint Policy Iteration for Linear Quadratic Control.” arXiv:2011.14212 [Cs, Eess, Math].
Grosse. 2021. “Metrics.” In CSC2541 Winter 2021.
Grosse, and Martens. 2016. “A Kronecker-Factored Approximate Fisher Matrix for Convolution Layers.” In Proceedings of The 33rd International Conference on Machine Learning.
Hensman, Rattray, and Lawrence. 2012. “Fast Variational Inference in the Conjugate Exponential Family.” In Advances in Neural Information Processing Systems.
Hu, Song, Weinstein, et al. 2022. “Training Overparametrized Neural Networks in Sublinear Time.”
Kakade. 2002. “A Natural Policy Gradient.” In Advances In Neural Information Processing Systems.
Karakida, and Osawa. 2020. “Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks.” Advances in Neural Information Processing Systems.
Khan, and Rue. 2023. “The Bayesian Learning Rule.”
Kovalev, Mishchenko, and Richtárik. 2019. “Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates.” arXiv:1912.01597 [Cs, Math, Stat].
Lesage-Landry, Taylor, and Shames. 2021. “Second-Order Online Nonconvex Optimization.” IEEE Transactions on Automatic Control.
Ljung, Pflug, and Walk. 1992. Stochastic Approximation and Optimization of Random Systems.
Lucchi, McWilliams, and Hofmann. 2015. “A Variance Reduced Stochastic Newton Method.” arXiv:1503.08316 [Cs].
Ly, Marsman, Verhagen, et al. 2017. “A Tutorial on Fisher Information.” Journal of Mathematical Psychology.
Martens. 2010. “Deep Learning via Hessian-Free Optimization.” In Proceedings of the 27th International Conference on International Conference on Machine Learning. ICML’10.
———. 2016. “Second-Order Optimization for Neural Networks.”
———. 2020. “New Insights and Perspectives on the Natural Gradient Method.” Journal of Machine Learning Research.
Martens, and Grosse. 2015. “Optimizing Neural Networks with Kronecker-Factored Approximate Curvature.” In Proceedings of the 32nd International Conference on Machine Learning.
Martens, and Sutskever. 2011. “Learning Recurrent Neural Networks with Hessian-Free Optimization.” In Proceedings of the 28th International Conference on International Conference on Machine Learning. ICML’11.
———. 2012. “Training Deep and Recurrent Networks with Hessian-Free Optimization.” In Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science.
Mosegaard, and Tarantola. 2002. “Probabilistic Approach to Inverse Problems.” In International Geophysics.
Nielsen. 2018. “An Elementary Introduction to Information Geometry.” arXiv:1808.08271 [Cs, Math, Stat].
Nurbekyan, Lei, and Yang. 2022. “Efficient Natural Gradient Descent Methods for Large-Scale Optimization Problems.”
Ollivier. 2017. “Online Natural Gradient as a Kalman Filter.” arXiv:1703.00209 [Math, Stat].
Osawa, Ishikawa, Yokota, et al. 2023. “ASDL: A Unified Interface for Gradient Preconditioning in PyTorch.”
Pavlov, Shames, and Manzie. 2020. “Interior Point Differential Dynamic Programming.” arXiv:2004.12710 [Cs, Eess, Math].
Robbins, and Siegmund. 1971. “A Convergence Theorem for Non Negative Almost Supermartingales and Some Applications.” In Optimizing Methods in Statistics.
Ruppert. 1985. “A Newton-Raphson Version of the Multivariate Robbins-Monro Procedure.” The Annals of Statistics.
Salimbeni, Eleftheriadis, and Hensman. 2018. “Natural Gradients in Practice: Non-Conjugate Variational Inference in Gaussian Process Models.” In International Conference on Artificial Intelligence and Statistics.
Schraudolph. 2002. “Fast Curvature Matrix-Vector Products for Second-Order Gradient Descent.” Neural Computation.
Schraudolph, Yu, and Günter. 2007. “A Stochastic Quasi-Newton Method for Online Convex Optimization.” In Artificial Intelligence and Statistics.
Wilkinson, Särkkä, and Solin. 2021. “Bayes-Newton Methods for Approximate Bayesian Inference with PSD Guarantees.”
Yao, Gholami, Keutzer, et al. 2020. “PyHessian: Neural Networks Through the Lens of the Hessian.” In arXiv:1912.07145 [Cs, Math].
Yurtsever, Tropp, Fercoq, et al. 2021. “Scalable Semidefinite Programming.” SIAM Journal on Mathematics of Data Science.
Zellner. 1988. “Optimal Information Processing and Bayes’s Theorem.” The American Statistician.
Zhang, Sun, Duvenaud, et al. 2018. “Noisy Natural Gradient as Variational Inference.” In Proceedings of the 35th International Conference on Machine Learning.