# Learning with conservation laws, invariances and symmetries

April 11, 2020 — February 25, 2022

Learning in complicated systems where we know that a conservation law is in effect; or, more ambitiously, learning a conservation law that we did not know was in effect. This comes up especially in ML for physics. Possibly this is the same idea as learning on manifolds, although the literatures do not seem to be closely connected. AFAIK this is not a particular challenge in traditional parametric statistics, where we can usually impose conservation laws on a problem through the likelihood; but in nonparametric models, or overparameterised models such as neural nets, it can get fiddly. Where does conservation of mass, momentum, energy etc. reside in a convnet? And what if we do not need to conserve a quantity exactly, but merely wish to regularise the system towards a conservation law?
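A minimal sketch of that last, soft-constraint option (the function names and the choice of `np.sum` as the conserved quantity are illustrative, not from any particular paper): rather than baking conservation into the architecture, penalise the model for violating the conserved quantity.

```python
import numpy as np

def conservation_penalty(x, y, conserved=np.sum):
    """Penalty on how badly a model output y violates a conservation law
    relative to its input x; `conserved` extracts the conserved quantity
    (here total "mass", via np.sum). Adding this to the training loss
    regularises towards conservation without enforcing it exactly."""
    return float(conserved(y) - conserved(x)) ** 2

x = np.array([1.0, 3.0, 2.0])
p_good = conservation_penalty(x, x[::-1])  # mass merely rearranged: penalty 0
p_bad = conservation_penalty(x, 0.9 * x)   # 10% mass leak: positive penalty
```

The penalty is zero exactly on the set of mass-conserving outputs, so scaling it up recovers a hard constraint in the limit.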

Possibly also in this category: a particular type of statistical-mechanical law on the learning process itself, specifically energy conservation in neural-net signal propagation. This is not a conservation law in the regression model *per se*, but a stability guarantee on the model and its training process. This is the deep-learning-as-dynamical-system trick.
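One toy version of that stability idea (a sketch, not any specific paper's construction): if every layer of a linear network has an orthogonal weight matrix, the squared norm of the signal — its "energy" — is conserved exactly through depth, so forward propagation can neither explode nor vanish.

```python
import numpy as np

def random_orthogonal(n, rng):
    """Random orthogonal matrix via QR decomposition; satisfies Q @ Q.T = I,
    so multiplication by Q preserves Euclidean norm."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

rng = np.random.default_rng(0)
x0 = rng.normal(size=16)
x = x0
for _ in range(50):  # a 50-layer linear network with orthogonal weights
    x = random_orthogonal(16, rng) @ x
# ||x|| equals ||x0|| after all 50 layers: signal energy is conserved,
# where a generic random deep product would explode or vanish.
```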

Even without imposing constraints on the model itself, a whole bunch of conservation laws and symmetries are exploited in *training procedures*: for example, in potential theory, in the statistical mechanics of learning, and in the use of energy conservation in Hamiltonian Monte Carlo.
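The HMC case is the most concrete: the leapfrog integrator approximately conserves the Hamiltonian, which is what keeps acceptance rates high over long trajectories. A minimal sketch on a standard Gaussian target:

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for H(q, p) = U(q) + |p|^2/2.

    A symplectic integrator: it conserves H up to O(step^2) over the whole
    trajectory, which is why HMC proposals are accepted with high probability."""
    q, p = np.copy(q), np.copy(p)
    p -= 0.5 * step * grad_U(q)       # initial half-step for momentum
    for _ in range(n_steps - 1):
        q += step * p                 # full position step
        p -= step * grad_U(q)         # full momentum step
    q += step * p
    p -= 0.5 * step * grad_U(q)       # final half-step for momentum
    return q, p

# Standard Gaussian target: U(q) = |q|^2 / 2, so grad_U(q) = q.
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q
q0, p0 = np.array([1.0]), np.array([0.5])
H0 = U(q0) + 0.5 * p0 @ p0
q1, p1 = leapfrog(q0, p0, grad_U, step=0.05, n_steps=100)
H1 = U(q1) + 0.5 * p1 @ p1
# H1 stays close to H0 despite 100 steps of dynamics
```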

AFAICT the state of the art is reviewed well in Bloem-Reddy and Teh (2020).

## 1 Incoming

What is the learning-invariant-representations trick?

Recent entrants in this area:

- Lagrangian neural networks
- C&C (M. Cranmer et al. 2020; Lutter, Ritter, and Peters 2019)
- permutation invariance

> we demonstrate principled advantages of enforcing conservation laws of the form gφ(fθ(x)) = gφ(x) by considering a special case where preimages under gφ form affine subspaces.
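The affine-subspace special case is easy to make concrete. A sketch under the assumption that the conserved quantity g is linear, g(z) = a·z, so that its preimages {z : a·z = c} are affine subspaces and any raw network output can be corrected by orthogonal projection (the names here are my own, not the paper's):

```python
import numpy as np

def conserve_linear(x, y, a):
    """Project y onto the affine subspace {z : a @ z == a @ x}.

    When the conserved quantity is linear, g(z) = a @ z, this orthogonal
    projection is the smallest correction to y that makes it conserve g
    exactly, i.e. g(corrected y) == g(x)."""
    deficit = a @ x - a @ y
    return y + deficit * a / (a @ a)

a = np.ones(4)                        # conserved quantity: total sum
x = np.array([1.0, 2.0, 3.0, 4.0])    # input state, sum 10
y = np.array([0.5, 0.5, 0.5, 0.5])    # raw network output, sum 2
y_c = conserve_linear(x, y, a)        # corrected output, sum 10 exactly
```

Wrapping a network f in this projection gives g(projected f(x)) = g(x) by construction, for any f.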

Noether networks look interesting.

Also related, learning with a PDE constraint.

Not sure where Dax et al. (2021) fits in.

Erik Bekkers’ seminar on Group Equivariant Deep Learning looks interesting.

## 2 Permutation invariance and equivariance

See deep sets.
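The deep-sets construction in a few lines, as a minimal NumPy sketch (the weights and sizes are arbitrary placeholders): embed each set element with a shared map, sum-pool, then transform the pooled vector. Sum-pooling makes the output invariant to any permutation of the elements.

```python
import numpy as np

def deep_set(x, W_phi, W_rho):
    """Minimal deep-sets model: f(X) = rho(sum_i phi(x_i)).

    phi is applied identically to each row (set element); summing over the
    set axis discards element order, so the whole map is permutation
    invariant."""
    phi = np.tanh(x @ W_phi)      # per-element embedding, shared weights
    pooled = phi.sum(axis=0)      # permutation-invariant pooling
    return np.tanh(pooled @ W_rho)

rng = np.random.default_rng(0)
W_phi, W_rho = rng.normal(size=(3, 8)), rng.normal(size=(8, 2))
x = rng.normal(size=(5, 3))       # a set of 5 elements in R^3
out = deep_set(x, W_phi, W_rho)
out_perm = deep_set(x[rng.permutation(5)], W_phi, W_rho)  # same set, shuffled
# out and out_perm agree: the model cannot see element order
```

Replacing the sum with max- or mean-pooling keeps invariance; dropping the pooling (applying phi rowwise and stopping there) gives permutation *equivariance* instead.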

## 3 References

*Nature Computational Science*.

*arXiv:2003.04630 [Physics, Stat]*.

*Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS)*.

*arXiv:2111.13139 [Astro-Ph, Physics:gr-Qc, Stat]*.

*Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS)*.

*arXiv:2104.05508 [Cs, Stat]*.

*Advances in Neural Information Processing Systems 32*.

*Advances in Neural Information Processing Systems*.

*arXiv:2010.07033 [Physics, Stat]*.

*Proceedings of the 20th Conference of the International Society for Music Information Retrieval*.

*arXiv:1907.04490 [Cs, Eess, Stat]*.

*arXiv:2002.00021 [Physics]*.

*Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS)*.

*arXiv:1701.02440 [Cs, Math, Stat]*.

*Journal of Computational Physics*.

*Journal of Mathematical Imaging and Vision*.

*2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*.

*Journal of Computational Physics*.