Learning with conservation laws, invariances and symmetries
April 11, 2020 — February 25, 2022
Learning in complicated systems where we know that there is a conservation law in effect, or, more ambitiously, learning a conservation law that we did not know was in effect. This comes up especially in ML for physics. Possibly this is the same idea as learning on manifolds, but the literatures do not seem to be closely connected. AFAIK this is not a particular challenge in traditional parametric statistics, where we can usually impose conservation laws on a problem through the likelihood, but in nonparametric models, or in overparameterised models such as neural nets, it can get fiddly. Where does conservation of mass, momentum, energy etc. reside in a convnet? And what if we do not need to conserve a quantity exactly but merely wish to regularise the system towards a conservation law?
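A minimal sketch of the soft option, assuming a hypothetical `net_apply(params, x)` that maps an input field to an output field, and taking the conserved quantity to be the total "mass" (the sum of the field values):

```python
# Sketch only: soft mass-conservation penalty, assuming a hypothetical
# network `net_apply(params, x)` mapping an input field to an output field.
import jax
import jax.numpy as jnp

def conserved_quantity(field):
    # Total "mass" of the field; swap in momentum, energy, etc. as needed.
    return jnp.sum(field)

def loss(params, x, y, net_apply, lam=1.0):
    pred = net_apply(params, x)
    data_term = jnp.mean((pred - y) ** 2)
    # Penalise violation of the conservation law rather than enforcing it exactly.
    conservation_term = (conserved_quantity(pred) - conserved_quantity(x)) ** 2
    return data_term + lam * conservation_term

# Gradients for training flow through the penalty as usual:
grad_loss = jax.grad(loss)
```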
Possibly also in this category: there is a particular type of statistical-mechanical law on the learning process itself, specifically energy conservation in neural-net signal propagation, which is not a conservation law in the regression model per se, but a stability guarantee on the model and its training process. This is the deep-learning-as-dynamical-system trick.
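One concrete instance of that trick (my sketch, not taken from any particular paper cited here) is a residual block whose weight matrix is made antisymmetric, so the underlying linear dynamics have purely imaginary eigenvalues and the signal norm is approximately preserved from layer to layer:

```python
# Sketch: approximately norm-preserving ("energy-conserving") signal propagation
# via an antisymmetric weight matrix, in the deep-learning-as-dynamical-system view.
import jax
import jax.numpy as jnp

def antisymmetric_block(params, x, step=0.1):
    W, b = params["W"], params["b"]
    A = W - W.T  # antisymmetric => purely imaginary eigenvalues, ~stable dynamics
    return x + step * jnp.tanh(A @ x + b)

d = 8
params = {
    "W": jax.random.normal(jax.random.PRNGKey(0), (d, d)) * 0.1,
    "b": jnp.zeros(d),
}
x = jax.random.normal(jax.random.PRNGKey(1), (d,))
for _ in range(10):
    x = antisymmetric_block(params, x)  # norm drifts only slowly
```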
Even without trying to impose constraints on the model, a whole bunch of conservation laws and symmetries are exploited in training procedures: for example in potential theory, in the statistical mechanics of learning, and in the use of conservation laws in Hamiltonian Monte Carlo.
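For instance, Hamiltonian Monte Carlo leans on (approximate) conservation of the Hamiltonian $H(q, p) = U(q) + \tfrac{1}{2}\|p\|^2$ under leapfrog integration. A bare-bones sketch on a toy Gaussian target:

```python
# Sketch: leapfrog integration as used in Hamiltonian Monte Carlo, which exploits
# approximate conservation of H(q, p) = U(q) + |p|^2 / 2.
import jax
import jax.numpy as jnp

def leapfrog(grad_U, q, p, step=0.1, n_steps=10):
    p = p - 0.5 * step * grad_U(q)      # half step in momentum
    for _ in range(n_steps - 1):
        q = q + step * p                 # full step in position
        p = p - step * grad_U(q)         # full step in momentum
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)       # final half step in momentum
    return q, p

# Toy target: standard Gaussian, U(q) = |q|^2 / 2.
U = lambda q: 0.5 * jnp.sum(q ** 2)
grad_U = jax.grad(U)
q0 = jnp.array([1.0, -0.5])
p0 = jnp.array([0.3, 0.7])
q1, p1 = leapfrog(grad_U, q0, p0)
# The energy drift is small, which is what makes the acceptance step cheap:
drift = (U(q1) + 0.5 * jnp.sum(p1 ** 2)) - (U(q0) + 0.5 * jnp.sum(p0 ** 2))
```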
AFAICT the state of the art is reviewed well in Bloem-Reddy and Teh (2020).
1 Incoming
What is the Learning invariant representations trick?
Recent entrants in this area:
- Lagrangian neural networks (M. Cranmer et al. 2020; Lutter, Ritter, and Peters 2019); see the sketch after this list.
- permutation invariance
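Here is a minimal sketch of the mechanism behind Lagrangian neural networks: accelerations are obtained from a scalar Lagrangian $L(q, \dot q)$ by solving the Euler-Lagrange equations with autodiff. In the cited papers $L$ is itself a neural network; below it is a known toy Lagrangian so the answer can be checked by hand.

```python
# Sketch: accelerations from a scalar Lagrangian via the Euler-Lagrange equations,
# (d^2L/dq_dot^2) q_ddot = dL/dq - (d^2L/dq dq_dot) q_dot, solved with autodiff.
import jax
import jax.numpy as jnp

def accelerations(lagrangian, q, q_dot):
    grad_q = jax.grad(lagrangian, argnums=0)(q, q_dot)
    hess_qdot = jax.hessian(lagrangian, argnums=1)(q, q_dot)
    mixed = jax.jacfwd(jax.grad(lagrangian, argnums=1), argnums=0)(q, q_dot)
    return jnp.linalg.solve(hess_qdot, grad_q - mixed @ q_dot)

# Toy check with a known Lagrangian: unit-mass particle in a quadratic potential,
# L = |q_dot|^2 / 2 - |q|^2 / 2, which should give q_ddot = -q.
L = lambda q, q_dot: 0.5 * jnp.sum(q_dot ** 2) - 0.5 * jnp.sum(q ** 2)
q = jnp.array([1.0, 2.0])
q_dot = jnp.array([0.0, 1.0])
a = accelerations(L, q, q_dot)   # ~ [-1., -2.]
```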
"We demonstrate principled advantages of enforcing conservation laws of the form $g_\phi(f_\theta(x)) = g_\phi(x)$ by considering a special case where preimages under $g_\phi$ form affine subspaces."
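A concrete reading of that setup, under my own simplifying assumption that $g_\phi$ is linear, $g_\phi(x) = Ax$ (so the preimages are indeed affine subspaces): the constraint can be enforced exactly by projecting the raw network output back onto $\{y : Ay = Ax\}$.

```python
# Sketch: exact enforcement of a linear conservation law g(x) = A @ x by
# projecting the raw network output onto the affine subspace {y : A y = A x}.
import jax.numpy as jnp

def project_to_conserved(y_raw, x, A):
    # Minimal-norm correction so that A @ y equals A @ x exactly.
    residual = A @ x - A @ y_raw
    correction = A.T @ jnp.linalg.solve(A @ A.T, residual)
    return y_raw + correction

# Example: conserve the total (sum of components), i.e. A is a row of ones.
A = jnp.ones((1, 4))
x = jnp.array([1.0, 2.0, 3.0, 4.0])
y_raw = jnp.array([0.5, 0.5, 0.5, 0.5])   # pretend network output
y = project_to_conserved(y_raw, x, A)
# jnp.sum(y) == jnp.sum(x), up to floating point.
```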
Noether networks look interesting.
Also related: learning with a PDE constraint.
Not sure where Dax et al. (2021) fits in.
Erik Bekkers’ seminar on Group Equivariant Deep Learning looks interesting.
2 Permutation invariance and equivariance
See deep sets.
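A minimal sketch of the Deep Sets construction: a permutation-invariant function of a set can be written as $\rho\left(\sum_i \phi(x_i)\right)$ with learned $\phi$ and $\rho$; here both are stubbed out as tiny MLPs with assumed shapes.

```python
# Sketch: Deep Sets style permutation invariance, f(X) = rho(sum_i phi(x_i)).
import jax
import jax.numpy as jnp

def mlp(params, x):
    W1, b1, W2, b2 = params
    return jnp.tanh(x @ W1 + b1) @ W2 + b2

def deep_set(phi_params, rho_params, X):
    # X has shape (n_elements, d_in); summing over elements removes any ordering.
    pooled = jnp.sum(jax.vmap(lambda x: mlp(phi_params, x))(X), axis=0)
    return mlp(rho_params, pooled)

def init_mlp(key, d_in, d_hidden, d_out):
    k1, k2 = jax.random.split(key)
    return (
        jax.random.normal(k1, (d_in, d_hidden)) * 0.1,
        jnp.zeros(d_hidden),
        jax.random.normal(k2, (d_hidden, d_out)) * 0.1,
        jnp.zeros(d_out),
    )

phi_params = init_mlp(jax.random.PRNGKey(0), 3, 16, 8)
rho_params = init_mlp(jax.random.PRNGKey(1), 8, 16, 1)
X = jax.random.normal(jax.random.PRNGKey(2), (5, 3))
out = deep_set(phi_params, rho_params, X)
out_permuted = deep_set(phi_params, rho_params, X[::-1])  # same value
```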