Deep learning as a dynamical system



Image: Donnie Darko

A thread of thought within neural network research that tries to render the learning of prediction functions tractable, or at least comprehensible, by treating networks as dynamical systems, and by using stability theory, Hamiltonian mechanics, optimal control and/or DE solvers to make it all work.

I’ve been interested in this since seeing the Haber and Ruthotto (2018) paper, but it got a kick when T. Q. Chen et al. (2018) won the best-paper award at NeurIPS for directly learning the ODEs themselves by related methods, which makes the whole thing look more useful.

Convnets/Resnets as discrete PDE approximations

Arguing that neural networks are, in the limit, approximants to quadrature solutions of certain ODEs gives a new perspective on how these things work, and also suggests that certain ODE tricks might be imported. This is mostly what Haber, Ruthotto, and collaborators do. A useful outcome here is stability of training: gradient signals are guaranteed to remain informative by ensuring the network preserves energy as it propagates through the layers (Haber and Ruthotto 2018; Haber et al. 2017; Chang et al. 2018; Ruthotto and Haber 2018). Stability here is meant in the dynamical-systems sense (energy-preserving operators, or stability of linear systems), not the algorithmic input-stability studied in learning theory; c.f. Wiatowski, Grohs, and Bölcskei (2018).
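
To make the ODE view concrete, here is a minimal sketch (assuming PyTorch; the class name and hyperparameters are illustrative, not lifted from any paper) of a residual block as one forward-Euler step of dx/dt = tanh(W(t)x + b(t)), with W parameterised to be antisymmetric in the spirit of Haber and Ruthotto (2018), so the layer neither amplifies nor kills the signal:

```python
import torch
import torch.nn as nn

class AntisymmetricResBlock(nn.Module):
    """One forward-Euler step x <- x + h * tanh(x W^T + b), with W built to be
    antisymmetric (minus a small damping term) so the linearised dynamics
    neither blow up nor die out across many layers. A sketch in the spirit of
    Haber and Ruthotto (2018); names and defaults are mine."""

    def __init__(self, dim, h=0.1, gamma=1e-2):
        super().__init__()
        self.K = nn.Parameter(0.1 * torch.randn(dim, dim))
        self.b = nn.Parameter(torch.zeros(dim))
        self.h, self.gamma = h, gamma

    def forward(self, x):
        # (K - K^T)/2 is antisymmetric, so its eigenvalues are purely imaginary
        # and the ODE is marginally stable; -gamma*I adds a little damping.
        W = 0.5 * (self.K - self.K.t()) - self.gamma * torch.eye(self.K.shape[0])
        return x + self.h * torch.tanh(x @ W.t() + self.b)

# Stacking L such blocks amounts to integrating the ODE over t in [0, L*h]
# with the forward Euler method.
net = nn.Sequential(*[AntisymmetricResBlock(16) for _ in range(10)])
y = net(torch.randn(8, 16))  # -> shape (8, 16)
```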

Another fun trick from this toolbox is the ability to interpolate and re-discretize ResNets, re-sampling the layers and weights themselves, by working out a net that solves the same underlying SDE at a different discretization. This essentially, AFAICT, allows one to upscale and downscale nets and/or the training data through their infinite-resolution limits. That sounds cool, but I have not seen much of it in practice. Is the complexity worth it?
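
As a toy illustration of that re-sampling idea (NumPy only; the linear-interpolation scheme is my own simplification, not the actual multiscale method of Haber et al. 2017): treat the per-layer weights as samples of a weight function on a virtual time axis and interpolate them onto a finer or coarser grid, rescaling the step size to match.

```python
import numpy as np

def resample_depth(weights, new_depth):
    """Re-discretize a 'ResNet-as-ODE': given per-layer weights sampled at
    times t_k = k/(L-1), linearly interpolate them onto a grid of new_depth
    layers. The Euler step h must be rescaled by L/new_depth so the stack
    still integrates (approximately) the same continuous dynamics.
    A simplified sketch, not the method of Haber et al. (2017)."""
    weights = np.asarray(weights)            # shape (L, ...) stacked over depth
    L = weights.shape[0]
    t_old = np.linspace(0.0, 1.0, L)
    t_new = np.linspace(0.0, 1.0, new_depth)
    flat = weights.reshape(L, -1)
    resampled = np.stack([np.interp(t_new, t_old, flat[:, j])
                          for j in range(flat.shape[1])], axis=1)
    return resampled.reshape((new_depth,) + weights.shape[1:])

# e.g. grow a 10-block net to 20 blocks; remember to halve the Euler step h.
coarse = np.random.randn(10, 16, 16)
fine = resample_depth(coarse, 20)            # -> shape (20, 16, 16)
```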

References

Andersson, Joel A. E., Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. 2019. “CasADi: A Software Framework for Nonlinear Optimization and Optimal Control.” Mathematical Programming Computation 11 (1): 1–36. https://doi.org/10.1007/s12532-018-0139-4.
Anil, Cem, James Lucas, and Roger Grosse. 2018. “Sorting Out Lipschitz Function Approximation,” November. https://arxiv.org/abs/1811.05381v1.
Arjovsky, Martin, Amar Shah, and Yoshua Bengio. 2016. “Unitary Evolution Recurrent Neural Networks.” In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, 1120–28. ICML’16. New York, NY, USA: JMLR.org. http://arxiv.org/abs/1511.06464.
Babtie, Ann C., Paul Kirk, and Michael P. H. Stumpf. 2014. “Topological Sensitivity Analysis for Systems Biology.” Proceedings of the National Academy of Sciences 111 (52): 18507–12. https://doi.org/10.1073/pnas.1414026112.
Chang, Bo, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. 2018. “Reversible Architectures for Arbitrarily Deep Residual Neural Networks.” In arXiv:1709.03698 [cs, Stat]. http://arxiv.org/abs/1709.03698.
Chen, Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 2018. “Neural Ordinary Differential Equations.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 6572–83. Curran Associates, Inc. http://papers.nips.cc/paper/7892-neural-ordinary-differential-equations.pdf.
Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. 2015. “Net2Net: Accelerating Learning via Knowledge Transfer.” arXiv:1511.05641 [cs], November. http://arxiv.org/abs/1511.05641.
Choromanski, Krzysztof, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, and Vikas Sindhwani. 2020. “An Ode to an ODE.” In Advances in Neural Information Processing Systems. Vol. 33. http://arxiv.org/abs/2006.11421.
Course, Kevin, Trefor Evans, and Prasanth Nair. 2020. “Weak Form Generalized Hamiltonian Learning.” In Advances in Neural Information Processing Systems. Vol. 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/d93c96e6a23fff65b91b900aaa541998-Abstract.html.
E, Weinan. 2017. “A Proposal on Machine Learning via Dynamical Systems.” Communications in Mathematics and Statistics 5 (1): 1–11. https://doi.org/10.1007/s40304-017-0103-z.
E, Weinan, Jiequn Han, and Qianxiao Li. 2018. “A Mean-Field Optimal Control Formulation of Deep Learning.” arXiv:1807.01083 [cs, Math], July. http://arxiv.org/abs/1807.01083.
Głuch, Grzegorz, and Rüdiger Urbanke. 2021. “Noether: The More Things Change, the More Stay the Same.” arXiv:2104.05508 [cs, Stat], April. http://arxiv.org/abs/2104.05508.
Haber, Eldad, Keegan Lensink, Eran Treister, and Lars Ruthotto. 2019. “IMEXnet A Forward Stable Deep Neural Network.” In International Conference on Machine Learning, 2525–34. PMLR. http://proceedings.mlr.press/v97/haber19a.html.
Haber, Eldad, Felix Lucka, and Lars Ruthotto. 2018. “Never Look Back - A Modified EnKF Method and Its Application to the Training of Neural Networks Without Back Propagation.” arXiv:1805.08034 [cs, Math], May. http://arxiv.org/abs/1805.08034.
Haber, Eldad, and Lars Ruthotto. 2018. “Stable Architectures for Deep Neural Networks.” Inverse Problems 34 (1): 014004. https://doi.org/10.1088/1361-6420/aa9a90.
Haber, Eldad, Lars Ruthotto, Elliot Holtham, and Seong-Hwan Jun. 2017. “Learning Across Scales - A Multiscale Method for Convolution Neural Networks.” arXiv:1703.02009 [cs], March. http://arxiv.org/abs/1703.02009.
Han, Jiequn, Arnulf Jentzen, and Weinan E. 2018. “Solving High-Dimensional Partial Differential Equations Using Deep Learning.” Proceedings of the National Academy of Sciences 115 (34): 8505–10. https://doi.org/10.1073/pnas.1718942115.
Hardt, Moritz, Benjamin Recht, and Yoram Singer. 2015. “Train Faster, Generalize Better: Stability of Stochastic Gradient Descent.” arXiv:1509.01240 [cs, Math, Stat], September. http://arxiv.org/abs/1509.01240.
Haro, A. 2008. “Automatic Differentiation Methods in Computational Dynamical Systems: Invariant Manifolds and Normal Forms of Vector Fields at Fixed Points.” IMA Note. http://www.maia.ub.es/~alex/admcds/admcds.pdf.
He, Junxian, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. “Lagging Inference Networks and Posterior Collapse in Variational Autoencoders.” In Proceedings of ICLR. http://arxiv.org/abs/1901.05534.
Huh, In, Eunho Yang, Sung Ju Hwang, and Jinwoo Shin. 2020. “Time-Reversal Symmetric ODE Network.” In Advances in Neural Information Processing Systems. Vol. 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/db8419f41d890df802dca330e6284952-Abstract.html.
Jing, Li, Yichen Shen, Tena Dubcek, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Soljačić. 2017. “Tunable Efficient Unitary Neural Networks (EUNN) and Their Application to RNNs.” In PMLR, 1733–41. http://proceedings.mlr.press/v70/jing17a.html.
Kolter, J Zico, and Gaurav Manek. 2019. “Learning Stable Deep Dynamics Models.” In Advances in Neural Information Processing Systems, 9. http://arxiv.org/abs/2001.06116.
Lawrence, Nathan, Philip Loewen, Michael Forbes, Johan Backstrom, and Bhushan Gopaluni. 2020. “Almost Surely Stable Deep Dynamics.” In Advances in Neural Information Processing Systems. Vol. 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/daecf755df5b1d637033bb29b319c39a-Abstract.html.
Massaroli, Stefano, Michael Poli, Michelangelo Bin, Jinkyoo Park, Atsushi Yamashita, and Hajime Asama. 2020. “Stable Neural Flows.” arXiv:2003.08063 [cs, Math, Stat], March. http://arxiv.org/abs/2003.08063.
Meng, Qi, Yue Wang, Wei Chen, Taifeng Wang, Zhi-Ming Ma, and Tie-Yan Liu. 2016. “Generalization Error Bounds for Optimization Algorithms via Stability.” In arXiv:1609.08397 [stat], 10:441–74. http://arxiv.org/abs/1609.08397.
Mhammedi, Zakaria, Andrew Hellicar, Ashfaqur Rahman, and James Bailey. 2017. “Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections.” In PMLR, 2401–9. http://proceedings.mlr.press/v70/mhammedi17a.html.
Nguyen, Long, and Andy Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations,” 34.
Poli, Michael, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, and Jinkyoo Park. 2020. “Hypersolvers: Toward Fast Continuous-Depth Models.” In Advances in Neural Information Processing Systems. Vol. 33. https://proceedings.neurips.cc//paper/2020/hash/f1686b4badcf28d33ed632036c7ab0b8-Abstract.html.
Rackauckas, Christopher. 2019. “The Essential Tools of Scientific Machine Learning (Scientific ML).” The Winnower, August. https://doi.org/10.15200/winn.156631.13064.
Rackauckas, Christopher, Yingbo Ma, Vaibhav Dixit, Xingjian Guo, Mike Innes, Jarrett Revels, Joakim Nyberg, and Vijay Ivaturi. 2018. “A Comparison of Automatic Differentiation and Continuous Sensitivity Analysis for Derivatives of Differential Equation Solutions.” arXiv:1812.01892 [cs], December. http://arxiv.org/abs/1812.01892.
Roeder, Geoffrey, Paul K. Grant, Andrew Phillips, Neil Dalchau, and Edward Meeds. 2019. “Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems.” arXiv:1905.12090 [cs, Stat], May. http://arxiv.org/abs/1905.12090.
Ruthotto, Lars, and Eldad Haber. 2018. “Deep Neural Networks Motivated by Partial Differential Equations.” arXiv:1804.04272 [cs, Math, Stat], April. http://arxiv.org/abs/1804.04272.
Saemundsson, Steindor, Alexander Terenin, Katja Hofmann, and Marc Peter Deisenroth. 2020. “Variational Integrator Networks for Physically Structured Embeddings.” arXiv:1910.09349 [cs, Stat], March. http://arxiv.org/abs/1910.09349.
Schoenholz, Samuel S., Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. 2016. “Deep Information Propagation,” November. https://arxiv.org/abs/1611.01232v2.
Şimşekli, Umut, Ozan Sener, George Deligiannidis, and Murat A. Erdogdu. 2020. “Hausdorff Dimension, Stochastic Differential Equations, and Generalization in Neural Networks.” arXiv:2006.09313 [cs, Stat], June. http://arxiv.org/abs/2006.09313.
Vorontsov, Eugene, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. 2017. “On Orthogonality and Learning Recurrent Networks with Long Term Dependencies.” In PMLR, 3570–78. http://proceedings.mlr.press/v70/vorontsov17a.html.
Wang, Chuang, Hong Hu, and Yue M. Lu. 2019. “A Solvable High-Dimensional Model of GAN.” arXiv:1805.08349 [cond-Mat, Stat], October. http://arxiv.org/abs/1805.08349.
Wiatowski, Thomas, and Helmut Bölcskei. 2015. “A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction.” In Proceedings of IEEE International Symposium on Information Theory. http://arxiv.org/abs/1512.06293.
Wiatowski, Thomas, Philipp Grohs, and Helmut Bölcskei. 2018. “Energy Propagation in Deep Convolutional Neural Networks.” IEEE Transactions on Information Theory 64 (7): 1–1. https://doi.org/10.1109/TIT.2017.2756880.
Yıldız, Çağatay, Markus Heinonen, and Harri Lähdesmäki. 2019. “ODE²VAE: Deep Generative Second Order ODEs with Bayesian Neural Networks.” arXiv:1905.10994 [cs, Stat], October. http://arxiv.org/abs/1905.10994.
Zammit-Mangion, Andrew, and Christopher K. Wikle. 2020. “Deep Integro-Difference Equation Models for Spatio-Temporal Forecasting.” Spatial Statistics 37 (June): 100408. https://doi.org/10.1016/j.spasta.2020.100408.
Zhang, Han, Xi Gao, Jacob Unterman, and Tom Arodz. 2020. “Approximation Capabilities of Neural ODEs and Invertible Residual Networks.” arXiv:1907.12998 [cs, Stat], February. http://arxiv.org/abs/1907.12998.
