Andersson, Joel A. E., Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. 2019.
“CasADi: A Software Framework for Nonlinear Optimization and Optimal Control.” Mathematical Programming Computation 11 (1): 1–36.
Anil, Cem, James Lucas, and Roger Grosse. 2018.
“Sorting Out Lipschitz Function Approximation,” November.
Arjovsky, Martin, Amar Shah, and Yoshua Bengio. 2016.
“Unitary Evolution Recurrent Neural Networks.” In
Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, 1120–28. ICML’16. New York, NY, USA: JMLR.org.
Babtie, Ann C., Paul Kirk, and Michael P. H. Stumpf. 2014.
“Topological Sensitivity Analysis for Systems Biology.” Proceedings of the National Academy of Sciences 111 (52): 18507–12.
Brandstetter, Johannes, Rianne van den Berg, Max Welling, and Jayesh K. Gupta. 2022.
“Clifford Neural Layers for PDE Modeling.” In.
Chandramoorthy, Nisha, Andreas Loukas, Khashayar Gatmiry, and Stefanie Jegelka. 2022.
“On the Generalization of Learning Algorithms That Do Not Converge.” arXiv.
Chang, Bo, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. 2018.
“Reversible Architectures for Arbitrarily Deep Residual Neural Networks.” In
arXiv:1709.03698 [Cs, Stat].
Chen, Chong, Yixuan Dou, Jie Chen, and Yaru Xue. 2022.
“A Novel Neural Network Training Framework with Data Assimilation.” The Journal of Supercomputing, June.
Chen, Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 2018.
“Neural Ordinary Differential Equations.” In
Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 6572–83. Curran Associates, Inc.
Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. 2015.
“Net2Net: Accelerating Learning via Knowledge Transfer.” arXiv:1511.05641 [Cs], November.
Choromanski, Krzysztof, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, and Vikas Sindhwani. 2020.
“An Ode to an ODE.” In
Advances in Neural Information Processing Systems. Vol. 33.
Chou, Hung-Hsu, Holger Rauhut, and Rachel Ward. 2023.
“Robust Implicit Regularization via Weight Normalization.” arXiv.
Course, Kevin, Trefor Evans, and Prasanth Nair. 2020.
“Weak Form Generalized Hamiltonian Learning.” In
Advances in Neural Information Processing Systems. Vol. 33.
E, Weinan. 2017.
“A Proposal on Machine Learning via Dynamical Systems.” Communications in Mathematics and Statistics 5 (1): 1–11.
———. 2021.
“The Dawning of a New Era in Applied Mathematics.” Notices of the American Mathematical Society 68 (04): 1.
E, Weinan, Jiequn Han, and Qianxiao Li. 2018.
“A Mean-Field Optimal Control Formulation of Deep Learning.” arXiv:1807.01083 [Cs, Math], July.
E, Weinan, Chao Ma, and Lei Wu. 2020.
“Machine Learning from a Continuous Viewpoint, I.” Science China Mathematics 63 (11): 2233–66.
Galimberti, Clara, Luca Furieri, Liang Xu, and Giancarlo Ferrari-Trecate. 2021.
“Non Vanishing Gradients for Arbitrarily Deep Neural Networks: A Hamiltonian System Approach.” In.
Głuch, Grzegorz, and Rüdiger Urbanke. 2021.
“Noether: The More Things Change, the More Stay the Same.” arXiv:2104.05508 [Cs, Stat], April.
Grohs, Philipp, and Lukas Herrmann. 2022.
“Deep Neural Network Approximation for High-Dimensional Elliptic PDEs with Boundary Conditions.” IMA Journal of Numerical Analysis 42 (3): 2055–82.
Gu, Albert, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. 2021.
“Combining Recurrent, Convolutional, and Continuous-Time Models with Linear State Space Layers.” In
Advances in Neural Information Processing Systems, 34:572–85. Curran Associates, Inc.
Haber, Eldad, Keegan Lensink, Eran Treister, and Lars Ruthotto. 2019.
“IMEXnet A Forward Stable Deep Neural Network.” In
International Conference on Machine Learning, 2525–34. PMLR.
Haber, Eldad, and Lars Ruthotto. 2018.
“Stable Architectures for Deep Neural Networks.” Inverse Problems 34 (1): 014004.
Haber, Eldad, Lars Ruthotto, Elliot Holtham, and Seong-Hwan Jun. 2017.
“Learning Across Scales - A Multiscale Method for Convolution Neural Networks.” arXiv:1703.02009 [Cs], March.
Han, Jiequn, Arnulf Jentzen, and Weinan E. 2018.
“Solving High-Dimensional Partial Differential Equations Using Deep Learning.” Proceedings of the National Academy of Sciences 115 (34): 8505–10.
Hardt, Moritz, Benjamin Recht, and Yoram Singer. 2015.
“Train Faster, Generalize Better: Stability of Stochastic Gradient Descent.” arXiv:1509.01240 [Cs, Math, Stat], September.
Hayou, Soufiane, Arnaud Doucet, and Judith Rousseau. 2019.
“On the Impact of the Activation Function on Deep Neural Networks Training.” In
Proceedings of the 36th International Conference on Machine Learning, 2672–80. PMLR.
Hayou, Soufiane, Jean-Francois Ton, Arnaud Doucet, and Yee Whye Teh. 2020.
“Pruning Untrained Neural Networks: Principles and Analysis.” arXiv:2002.08797 [Cs, Stat], June.
He, Junxian, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019.
“Lagging Inference Networks and Posterior Collapse in Variational Autoencoders.” In
Proceedings of ICLR.
Huh, In, Eunho Yang, Sung Ju Hwang, and Jinwoo Shin. 2020.
“Time-Reversal Symmetric ODE Network.” In
Advances in Neural Information Processing Systems. Vol. 33.
Jing, Li, Yichen Shen, Tena Dubcek, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Soljačić. 2017.
“Tunable Efficient Unitary Neural Networks (EUNN) and Their Application to RNNs.” In
PMLR, 1733–41.
Kidger, Patrick. 2022.
“On Neural Differential Equations.” Oxford.
Kolter, J Zico, and Gaurav Manek. 2019.
“Learning Stable Deep Dynamics Models.” In
Advances in Neural Information Processing Systems, 9.
Kovachki, Nikola B., and Andrew M. Stuart. 2019.
“Ensemble Kalman Inversion: A Derivative-Free Technique for Machine Learning Tasks.” Inverse Problems 35 (9): 095005.
Lawrence, Nathan, Philip Loewen, Michael Forbes, Johan Backstrom, and Bhushan Gopaluni. 2020.
“Almost Surely Stable Deep Dynamics.” In
Advances in Neural Information Processing Systems. Vol. 33.
Long, Zichao, Yiping Lu, Xianzhong Ma, and Bin Dong. 2018.
“PDE-Net: Learning PDEs from Data.” In
Proceedings of the 35th International Conference on Machine Learning, 3208–16. PMLR.
Massaroli, Stefano, Michael Poli, Michelangelo Bin, Jinkyoo Park, Atsushi Yamashita, and Hajime Asama. 2020.
“Stable Neural Flows.” arXiv:2003.08063 [Cs, Math, Stat], March.
Meng, Qi, Yue Wang, Wei Chen, Taifeng Wang, Zhi-Ming Ma, and Tie-Yan Liu. 2016.
“Generalization Error Bounds for Optimization Algorithms via Stability.” In
arXiv:1609.08397 [Stat], 10:441–74.
Mhammedi, Zakaria, Andrew Hellicar, Ashfaqur Rahman, and James Bailey. 2017.
“Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections.” In
PMLR, 2401–9.
Nguyen, Long, and Andy Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations,” 34.
Niu, Murphy Yuezhen, Lior Horesh, and Isaac Chuang. 2019.
“Recurrent Neural Networks in the Eye of Differential Equations.” arXiv:1904.12933 [Quant-Ph, Stat], April.
Opschoor, Joost A. A., Philipp C. Petersen, and Christoph Schwab. 2020.
“Deep ReLU Networks and High-Order Finite Element Methods.” Analysis and Applications 18 (05): 715–70.
Poli, Michael, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, and Jinkyoo Park. 2020.
“Hypersolvers: Toward Fast Continuous-Depth Models.” In
Advances in Neural Information Processing Systems. Vol. 33.
Rackauckas, Christopher, Yingbo Ma, Vaibhav Dixit, Xingjian Guo, Mike Innes, Jarrett Revels, Joakim Nyberg, and Vijay Ivaturi. 2018.
“A Comparison of Automatic Differentiation and Continuous Sensitivity Analysis for Derivatives of Differential Equation Solutions.” arXiv:1812.01892 [Cs], December.
Ray, Deep, Orazio Pinti, and Assad A. Oberai. 2023.
“Deep Learning and Computational Physics (Lecture Notes).”
Roberts, Daniel A., Sho Yaida, and Boris Hanin. 2021.
“The Principles of Deep Learning Theory.” arXiv:2106.10165 [Hep-Th, Stat], August.
Roeder, Geoffrey, Paul K. Grant, Andrew Phillips, Neil Dalchau, and Edward Meeds. 2019.
“Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems.” arXiv:1905.12090 [Cs, Stat], May.
Ruhe, David, Jayesh K Gupta, Steven de Keninck, Max Welling, and Johannes Brandstetter. 2023.
“Geometric Clifford Algebra Networks.” In
arXiv Preprint arXiv:2302.06594.
Ruthotto, Lars, and Eldad Haber. 2020.
“Deep Neural Networks Motivated by Partial Differential Equations.” Journal of Mathematical Imaging and Vision 62 (3): 352–64.
Saemundsson, Steindor, Alexander Terenin, Katja Hofmann, and Marc Peter Deisenroth. 2020.
“Variational Integrator Networks for Physically Structured Embeddings.” arXiv:1910.09349 [Cs, Stat], March.
Schoenholz, Samuel S., Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. 2017.
“Deep Information Propagation.” In.
Şimşekli, Umut, Ozan Sener, George Deligiannidis, and Murat A. Erdogdu. 2020.
“Hausdorff Dimension, Stochastic Differential Equations, and Generalization in Neural Networks.” CoRR abs/2006.09313.
Venturi, Daniele, and Xiantao Li. 2022.
“The Mori-Zwanzig Formulation of Deep Learning.” arXiv.
Vorontsov, Eugene, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. 2017.
“On Orthogonality and Learning Recurrent Networks with Long Term Dependencies.” In
PMLR, 3570–78.
Wang, Chuang, Hong Hu, and Yue M. Lu. 2019.
“A Solvable High-Dimensional Model of GAN.” arXiv:1805.08349 [Cond-Mat, Stat], October.
Wiatowski, Thomas, and Helmut Bölcskei. 2015.
“A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction.” In
Proceedings of IEEE International Symposium on Information Theory.
Wiatowski, Thomas, Philipp Grohs, and Helmut Bölcskei. 2018.
“Energy Propagation in Deep Convolutional Neural Networks.” IEEE Transactions on Information Theory 64 (7): 1–1.
Yegenoglu, Alper, Kai Krajsek, Sandra Diaz Pier, and Michael Herty. 2020.
“Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-Performing Gradient Descent.” In
Machine Learning, Optimization, and Data Science, edited by Giuseppe Nicosia, Varun Ojha, Emanuele La Malfa, Giorgio Jansen, Vincenzo Sciacca, Panos Pardalos, Giovanni Giuffrida, and Renato Umeton, 12566:78–92. Cham: Springer International Publishing.
Yıldız, Çağatay, Markus Heinonen, and Harri Lähdesmäki. 2019.
“ODE\(^2\)VAE: Deep Generative Second Order ODEs with Bayesian Neural Networks.” arXiv:1905.10994 [Cs, Stat], October.
Zammit-Mangion, Andrew, and Christopher K. Wikle. 2020.
“Deep Integro-Difference Equation Models for Spatio-Temporal Forecasting.” Spatial Statistics 37 (June): 100408.
Zhang, Han, Xi Gao, Jacob Unterman, and Tom Arodz. 2020.
“Approximation Capabilities of Neural ODEs and Invertible Residual Networks.” arXiv:1907.12998 [Cs, Stat], February.
Zhi, Weiming, Tin Lai, Lionel Ott, Edwin V. Bonilla, and Fabio Ramos. 2022.
“Learning Efficient and Robust Ordinary Differential Equations via Invertible Neural Networks.” In
International Conference on Machine Learning, 27060–74. PMLR.