Learning with conservation laws, invariances and symmetries



Figure: Failure of conservation of mass at system boundaries is a common problem in models with a nonparametric likelihood.

Learning in complicated systems where we know that a conservation law is in effect; or, more ambitiously, learning a conservation law that we did not know was in effect. This comes up especially in ML for physics. AFAIK it is not a particular challenge in traditional parametric statistics, where we can usually impose conservation laws on a problem through the likelihood, but in nonparametric models, or in overparameterised models such as neural nets, it can get fiddly. Where does conservation of mass, momentum, energy etc. reside in a convnet? And what if we do not need to conserve a quantity exactly, but wish to regularise the system towards a conservation law?
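To make the soft option concrete, here is a minimal sketch, in JAX, of regularising a network towards conservation of a summed quantity (say, total mass over a grid) by adding a penalty to the training loss. This is my own illustration, not taken from any of the references below; `predict` and `lam` are placeholder names.

```python
import jax.numpy as jnp

def conservation_penalty(x, y_pred):
    # Penalise mismatch between the total "mass" of the input field
    # and the total "mass" of the predicted field.
    return (jnp.sum(y_pred) - jnp.sum(x)) ** 2

def loss(params, predict, x, y, lam=1e-2):
    y_pred = predict(params, x)
    data_fit = jnp.mean((y_pred - y) ** 2)
    # Soft constraint: nudge the model towards the conservation law
    # rather than enforcing it exactly.
    return data_fit + lam * conservation_penalty(x, y_pred)
```

Cranking `lam` up approximates a hard constraint, but if we truly need exact conservation it is usually better built into the architecture, as in several of the references below.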

Possibly also in this category: there is a particular type of statistical-mechanical law on the learning process itself, specifically energy conservation in neural-net signal propagation, which is not a conservation law in the regression model per se, but a stability guarantee on the model and its training process. This is the deep-learning-as-dynamical-system trick.
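A minimal sketch of that trick, assuming the stable-architecture recipe of giving a residual block an antisymmetric weight matrix: the linearised dynamics then have purely imaginary eigenvalues, so signal energy neither explodes nor vanishes with depth. My own illustration, not taken from the cited papers.

```python
import jax.numpy as jnp

def antisymmetric_resblock(h, W, b, step=0.1):
    # Keep only the antisymmetric part of W; the linearised ODE
    # dh/dt = (W - W^T) h then has purely imaginary eigenvalues,
    # so the forward pass approximately preserves the signal norm.
    A = W - W.T
    return h + step * jnp.tanh(A @ h + b)
```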

Even without trying to impose constraints on the model, a whole bunch of conservation laws and symmetries are exploited in training procedures, for example in potential theory, in the statistical mechanics of learning, and in the use of conservation laws in Hamiltonian Monte Carlo.
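The HMC case is the easiest to show: the leapfrog integrator approximately conserves the Hamiltonian $H(q, p) = U(q) + \tfrac{1}{2}\|p\|^2$, which is what lets distant proposals be accepted with high probability. A toy sketch (not a production sampler):

```python
import jax
import jax.numpy as jnp

def leapfrog(q, p, grad_U, step=0.1, n_steps=20):
    # Symplectic leapfrog integration; approximately conserves H(q, p).
    p = p - 0.5 * step * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p - step * grad_U(q)
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)
    return q, -p  # negate momentum so the proposal is reversible

# e.g. a standard normal target, U(q) = 0.5 * q @ q
grad_U = jax.grad(lambda q: 0.5 * jnp.dot(q, q))
q_new, p_new = leapfrog(jnp.ones(3), jnp.zeros(3), grad_U)
```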

Incoming

What is the learning invariant representations trick?

Recent entrants in this area:

“we demonstrate principled advantages of enforcing conservation laws of the form $g_\phi(f_\theta(x)) = g_\phi(x)$ by considering a special case where preimages under $g_\phi$ form affine subspaces.”
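The linear special case is easy to sketch: if $g_\phi(x) = Ax$, the preimage of $g_\phi(x)$ is the affine subspace $\{z : Az = Ax\}$, and we can enforce the conservation law exactly by projecting the raw prediction back onto that subspace. My own illustration of the idea (assuming $A$ has full row rank), not necessarily the construction used in the quoted paper:

```python
import jax.numpy as jnp

def project_to_conserve(y_raw, x, A):
    # Minimal-norm correction of y_raw so that A @ y == A @ x exactly,
    # i.e. orthogonal projection onto the affine subspace {z : A z = A x}.
    residual = A @ y_raw - A @ x
    return y_raw - jnp.linalg.pinv(A) @ residual
```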

Noether networks look interesting.

Also related, learning with a PDE constraint.

Not sure where Dax et al. (2021) fits in.

Erik Bekkers’ seminar on Group Equivariant Deep Learning looks interesting.

Permutation invariance and equivariance

See deep sets.
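The sum-pooling trick that buys permutation invariance fits in a few lines. A sketch of the Deep Sets construction $\rho\left(\sum_i \phi(x_i)\right)$, with `phi` and `rho` standing in for arbitrary small networks:

```python
import jax
import jax.numpy as jnp

def deep_set_invariant(xs, phi, rho):
    # xs has shape (n, d); summing over the n elements makes the output
    # invariant to any permutation of them.
    pooled = jnp.sum(jax.vmap(phi)(xs), axis=0)
    return rho(pooled)

def deep_set_equivariant(xs, phi, rho):
    # Equivariant variant: each element sees the pooled summary, and the
    # outputs permute along with the inputs.
    pooled = jnp.sum(jax.vmap(phi)(xs), axis=0)
    return jax.vmap(lambda x: rho(jnp.concatenate([x, pooled])))(xs)
```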

References

Azangulov, Iskander, Andrei Smolensky, Alexander Terenin, and Viacheslav Borovitskiy. 2022. “Stationary Kernels and Gaussian Processes on Lie Groups and Their Homogeneous Spaces I: The Compact Case.” arXiv.
Chen, Boyuan, Kuang Huang, Sunand Raghupathi, Ishaan Chandratreya, Qiang Du, and Hod Lipson. 2022. “Automated Discovery of Fundamental Variables Hidden in Experimental Data.” Nature Computational Science 2 (7): 433–42.
Cranmer, Miles D, Rui Xu, Peter Battaglia, and Shirley Ho. 2019. “Learning Symbolic Physics with Graph Networks.” In Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS), 6.
Cranmer, Miles, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. 2020. “Lagrangian Neural Networks.” arXiv:2003.04630 [Physics, Stat], July.
Dax, Maximilian, Stephen R. Green, Jonathan Gair, Michael Deistler, Bernhard Schölkopf, and Jakob H. Macke. 2021. “Group Equivariant Neural Posterior Estimation.” arXiv:2111.13139 [Astro-Ph, Physics:gr-Qc, Stat], November.
Erichson, N Benjamin, Michael Muehlebach, and Michael W Mahoney. 2019. “Physics-Informed Autoencoders for Lyapunov-Stable Fluid Flow Prediction.” In Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS), 6.
Głuch, Grzegorz, and Rüdiger Urbanke. 2021. “Noether: The More Things Change, the More Stay the Same.” arXiv:2104.05508 [Cs, Stat], April.
Greydanus, Samuel, Misko Dzamba, and Jason Yosinski. 2019. “Hamiltonian Neural Networks.” In Advances in Neural Information Processing Systems 32, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett, 15379–89. Curran Associates, Inc.
Krämer, Andreas, Jonas Köhler, and Frank Noé. 2020. “Training Invertible Linear Layers Through Rank-One Perturbations.” arXiv:2010.07033 [Physics, Stat].
Lattner, Stefan, Monika Dorfler, and Andreas Arzt. 2019. “Learning Complex Basis Functions for Invariant Representations of Audio.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval, 8.
Lutter, Michael, Christian Ritter, and Jan Peters. 2019. “Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning.” arXiv:1907.04490 [Cs, Eess, Stat], July.
Mohan, Arvind T., Nicholas Lubbers, Daniel Livescu, and Michael Chertkov. 2020. “Embedding Hard Physical Constraints in Neural Network Coarse-Graining of 3D Turbulence.” arXiv:2002.00021 [Physics], February.
Mototake, Yoh-ichi. 2019. “Conservation Law Estimation by Extracting the Symmetry of a Dynamical System Using a Deep Neural Network.” In Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS), 8.
Popov, Andrey Anatoliyevich. 2022. “Combining Data-Driven and Theory-Guided Models in Ensemble Data Assimilation.” ETD. Virginia Tech.
Raissi, Maziar, and George Em Karniadakis. 2017. “Machine Learning of Linear Differential Equations Using Gaussian Processes.” arXiv:1701.02440 [Cs, Math, Stat], January.
Raissi, Maziar, P. Perdikaris, and George Em Karniadakis. 2019. “Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations.” Journal of Computational Physics 378 (February): 686–707.
Rezende, Danilo J, Sébastien Racanière, Irina Higgins, and Peter Toth. 2019. “Equivariant Hamiltonian Flows.” In Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS), 6.
Sanchez-Gonzalez, Alvaro, Victor Bapst, Peter Battaglia, and Kyle Cranmer. 2019. “Hamiltonian Graph Networks with ODE Integrators.” In Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS), 11.
Thickstun, John, Zaid Harchaoui, Dean P. Foster, and Sham M. Kakade. 2018. “Invariances and Data Augmentation for Supervised Music Transcription.” In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2241–45.
Wang, Rui, Robin Walters, and Rose Yu. 2022. “Data Augmentation Vs. Equivariant Networks: A Theory of Generalization on Dynamics Forecasting.” arXiv.
Zhu, Yinhao, Nicholas Zabaras, Phaedon-Stelios Koutsourelakis, and Paris Perdikaris. 2019. “Physics-Constrained Deep Learning for High-Dimensional Surrogate Modeling and Uncertainty Quantification Without Labeled Data.” Journal of Computational Physics 394 (October): 56–81.
