Gradient flows

infinitesimal optimization



Hinze et al. (2021) depict a mosquito’s gradient flow in a 3D optimisation problem.

We can think of gradient flows as the continuous-time limit of gradient descent: as the step size shrinks to zero, the iterates trace out the solution of the ODE dx/dt = −∇f(x).
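A minimal sketch of that correspondence, on a toy quadratic objective (all the numbers below are illustrative, not from any particular source): gradient descent with step size η is exactly the forward-Euler discretisation of the gradient flow, so for small η the two trajectories nearly coincide.

```python
import numpy as np

# Toy quadratic objective f(x) = 0.5 * x' A x, with diagonal A
# (illustrative values).
A = np.diag([3.0, 1.0])
grad = lambda x: A @ x

x0 = np.array([1.0, 1.0])
eta, steps = 0.01, 1000

# Gradient descent is the forward-Euler discretisation of the
# gradient flow dx/dt = -grad f(x), with step size eta.
x_gd = x0.copy()
for _ in range(steps):
    x_gd = x_gd - eta * grad(x_gd)

# For a diagonal quadratic the flow has a closed form,
# x_i(t) = exp(-lambda_i * t) * x0_i, evaluated at t = eta * steps.
x_flow = np.exp(-np.diag(A) * eta * steps) * x0

# As eta -> 0 the discrete iterates converge to the flow; at
# eta = 0.01 they already agree to several decimal places.
```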

Stochastic

SGD likewise admits a continuous-time limit, this time as a stochastic differential equation (Ljung, Pflug, and Walk 1992; Mandt, Hoffman, and Blei 2017). Many super nice things are easy to prove using these bad boys, especially SGMCMC things. Worth the price of dusting off the old stochastic calculus.
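For instance, in the Mandt, Hoffman, and Blei (2017) picture, constant-step SGD on a quadratic loss behaves like an Ornstein–Uhlenbeck process, so its stationary variance can be predicted in closed form. A rough sketch (the 1-d loss, noise scale, and step size here are made-up illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-d quadratic loss f(x) = 0.5 * a * x^2 with additive gradient
# noise of scale sigma. Constant-step SGD approximates the SDE
#   dX = -a X dt + sqrt(eta) * sigma dW,   t = k * eta,
# an Ornstein-Uhlenbeck process with stationary variance
# eta * sigma^2 / (2 * a).
a, sigma, eta = 1.0, 1.0, 0.01

x = 0.0
samples = []
for k in range(200_000):
    g = a * x + sigma * rng.standard_normal()  # noisy gradient
    x = x - eta * g
    if k > 50_000:  # discard burn-in before collecting samples
        samples.append(x)

empirical_var = np.var(samples)
predicted_var = eta * sigma**2 / (2 * a)
# The empirical stationary variance matches the SDE prediction
# up to O(eta) discretisation error.
```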

References

Ambrosio, Luigi, and Nicola Gigli. 2013. “A User’s Guide to Optimal Transport.” In Modelling and Optimisation of Flows on Networks: Cetraro, Italy 2009, Editors: Benedetto Piccoli, Michel Rascle, edited by Luigi Ambrosio, Alberto Bressan, Dirk Helbing, Axel Klar, and Enrique Zuazua, 1–155. Lecture Notes in Mathematics. Berlin, Heidelberg: Springer.
Ambrosio, Luigi, Nicola Gigli, and Giuseppe Savaré. 2008. Gradient Flows: In Metric Spaces and in the Space of Probability Measures. 2nd ed. Lectures in Mathematics. ETH Zürich. Birkhäuser Basel.
Ancona, Marco, Enea Ceolini, Cengiz Öztireli, and Markus Gross. 2017. “Towards Better Understanding of Gradient-Based Attribution Methods for Deep Neural Networks,” November.
Bartlett, Peter L., Andrea Montanari, and Alexander Rakhlin. 2021. “Deep Learning: A Statistical Viewpoint.” Acta Numerica 30 (May): 87–201.
Chizat, Lénaïc, and Francis Bach. 2018. “On the Global Convergence of Gradient Descent for over-Parameterized Models Using Optimal Transport.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 3040–50. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Chu, Casey, Kentaro Minami, and Kenji Fukumizu. 2022. “The Equivalence Between Stein Variational Gradient Descent and Black-Box Variational Inference.” In, 5.
Di Giovanni, Francesco, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, and Michael M. Bronstein. 2022. “Graph Neural Networks as Gradient Flows.” arXiv.
Galy-Fajou, Théo, Valerio Perrone, and Manfred Opper. 2021. “Flexible and Efficient Inference with Particles for the Variational Gaussian Approximation.” Entropy 23 (8): 990.
Garbuno-Inigo, Alfredo, Franca Hoffmann, Wuchen Li, and Andrew M. Stuart. 2020. “Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems 19 (1): 412–41.
Hinze, Annika, Jörgen Lantz, Sharon R. Hill, and Rickard Ignell. 2021. “Mosquito Host Seeking in 3D Using a Versatile Climate-Controlled Wind Tunnel System.” Frontiers in Behavioral Neuroscience 15 (March): 643693.
Hochreiter, Sepp, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. 2001. “Gradient Flow in Recurrent Nets: The Difficulty of Learning Long-Term Dependencies.” In A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
Liu, Qiang. 2016. “Stein Variational Gradient Descent: Theory and Applications,” 6.
———. 2017. “Stein Variational Gradient Descent as Gradient Flow.” arXiv.
Ljung, Lennart, Georg Pflug, and Harro Walk. 1992. Stochastic Approximation and Optimization of Random Systems. Basel: Birkhäuser.
Mandt, Stephan, Matthew D. Hoffman, and David M. Blei. 2017. “Stochastic Gradient Descent as Approximate Bayesian Inference.” JMLR, April.
Schillings, Claudia, and Andrew M. Stuart. 2017. “Analysis of the Ensemble Kalman Filter for Inverse Problems.” SIAM Journal on Numerical Analysis 55 (3): 1264–90.
