Learning to approximate differential equations and other interpretable physical dynamics with neural nets.
Related: analysing a neural net itself *as* a dynamical system, which is not quite the same problem but overlaps, and learning general recurrent dynamics.
Variational state filters.
Where the parameters are meaningful, not just weights, we tend to think about system identification.

A deterministic version of this problem is what, e.g., the famous Vector Institute Neural ODE paper (T. Q. Chen et al. 2018) did. Author Duvenaud argues that in some ways the hype ran away with the Neural ODE paper, and credits CasADi with the underlying innovations.
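Stripped of the adjoint machinery, the deterministic version is just gradient descent through a discretized solver. A toy sketch of mine, not the paper's method: fit the rate `a` of dx/dt = a·x by differentiating through an explicit Euler unroll, where the gradient of the unroll happens to be available in closed form.

```python
def euler_unroll(a, x0, h, n):
    """Integrate dx/dt = a*x for n explicit Euler steps of size h."""
    x = x0
    for _ in range(n):
        x = x + h * a * x  # so x_n = x0 * (1 + a*h)**n
    return x

def fit_rate(x_target, x0=1.0, h=0.01, n=100, lr=0.05, iters=500):
    """Gradient descent on the loss (x_n - x_target)**2.
    Differentiating the unroll: d x_n / d a = x0 * n * h * (1 + a*h)**(n - 1)."""
    a = 0.0
    for _ in range(iters):
        xn = euler_unroll(a, x0, h, n)
        grad = 2.0 * (xn - x_target) * x0 * n * h * (1.0 + a * h) ** (n - 1)
        a -= lr * grad
    return a

# Target generated by the same solver with true rate 0.5; the fit recovers it.
x_target = euler_unroll(0.5, 1.0, 0.01, 100)
a_hat = fit_rate(x_target)
```

Real implementations differentiate through (or around, via the adjoint ODE) a proper adaptive solver rather than a fixed Euler grid.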

There are various laypersons’ introductions/tutorials in this area, including a simple and practical take in Julia. See also the CasADi example.

Learning an ODE, a purely deterministic process, feels unsatisfying; we want a model that encodes responses and effects of interactions. It is not ideal to have time series models that need to encode everything in an initial state.

Also, we would prefer models to be stochastic.
Learnable *SDEs* are probably what we want.
I’m particularly interested in jump ODE regression.
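For concreteness, the forward pass of a learnable SDE is an Euler–Maruyama unroll with parameterized drift and diffusion functions; libraries like torchsde wrap this in proper solvers with adjoint gradients. A bare sketch of the simulation step, using an Ornstein–Uhlenbeck process as the example:

```python
import math, random

def euler_maruyama(drift, diffusion, x0, t1, n_steps, rng):
    """Simulate dX = drift(X) dt + diffusion(X) dW on [0, t1]."""
    h = t1 / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(h))  # Brownian increment ~ N(0, h)
        x = x + drift(x) * h + diffusion(x) * dw
        path.append(x)
    return path

# Ornstein-Uhlenbeck process: mean-reverting toward 0 from x0 = 3.
rng = random.Random(0)
path = euler_maruyama(lambda x: -2.0 * x, lambda x: 0.5, 3.0, 5.0, 1000, rng)
```

In a learnable SDE the `drift` and `diffusion` callables would be neural nets, and the gradient of a loss on `path` would be taken with respect to their weights.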

Homework: Duvenaud again, tweeting some explanatory animations.

Note connection to reparameterization tricks, in that neural ODEs give you cheap differentiable reparameterizations.
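To spell out the connection: a reparameterization trick writes a sample as a deterministic, differentiable function of parameters and exogenous noise, so gradients pass through the sampler; a neural ODE supplies such a function where the transform is an ODE flow. The plain Gaussian case, as a pathwise-gradient sketch:

```python
import math, random

def reparam_sample(mu, log_sigma, eps):
    """z = mu + sigma*eps: deterministic and differentiable in (mu, log_sigma)."""
    return mu + math.exp(log_sigma) * eps

def pathwise_grad_mu(mu, log_sigma, n, rng):
    """Monte Carlo estimate of d/dmu E[z^2] via the pathwise estimator.
    Since dz/dmu = 1, each sample contributes d(z^2)/dz * 1 = 2z."""
    total = 0.0
    for _ in range(n):
        z = reparam_sample(mu, log_sigma, rng.gauss(0.0, 1.0))
        total += 2.0 * z
    return total / n  # should approach the true gradient 2*mu

g = pathwise_grad_mu(1.0, math.log(0.5), 100_000, random.Random(0))
```

A neural ODE plays the role of `reparam_sample` with a much richer transform, which is what makes it useful as a flow.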

Gu et al. (2021) unifies neural ODEs with RNNs.
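One direction of that unification is elementary: discretizing a linear ODE ẋ = ax + bu yields a linear RNN recurrence. A scalar sketch using the bilinear (Tustin) transform, one of the discretizations that appears in this literature:

```python
def bilinear_discretize(a, b, h):
    """Scalar bilinear transform of dx/dt = a*x + b*u with step h,
    giving the recurrence x_{k+1} = abar*x_k + bbar*u_k."""
    abar = (1 + h * a / 2) / (1 - h * a / 2)
    bbar = h * b / (1 - h * a / 2)
    return abar, bbar

def run_ssm(a, b, h, x0, us):
    """Unroll the discretized state-space model as an RNN over inputs us."""
    abar, bbar = bilinear_discretize(a, b, h)
    x, xs = x0, []
    for u in us:
        x = abar * x + bbar * u
        xs.append(x)
    return xs

# Stable state (a < 0) driven by constant input; steady state is -b/a * u = 1.
xs = run_ssm(-1.0, 1.0, 0.1, 0.0, [1.0] * 200)
```

The bilinear transform preserves the DC gain of the continuous system, which is why the unrolled recurrence settles at exactly the continuous steady state here.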

## Questions

How do you do ensemble training for posterior predictives in NODEs? How do you guarantee stability in the learned dynamics?

## Recursive estimation

See recursive identification for generic theory of learning under the distribution shift induced by a moving parameter vector.
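The workhorse example is recursive least squares with a forgetting factor, which discounts old data exponentially so that a drifting parameter can be tracked online; a scalar sketch:

```python
import random

def rls_step(theta, p, u, y, lam=0.99):
    """One recursive-least-squares update for the scalar model y = theta*u + noise.
    lam < 1 forgets old data, so a moving theta can be tracked."""
    k = p * u / (lam + u * p * u)        # gain
    theta = theta + k * (y - theta * u)  # correct by prediction error
    p = (p - k * u * p) / lam            # covariance update
    return theta, p

rng = random.Random(0)
theta_hat, p = 0.0, 100.0
for t in range(2000):
    true_theta = 1.0 if t < 1000 else 2.0  # parameter jumps mid-stream
    u = rng.gauss(0.0, 1.0)
    y = true_theta * u + 0.01 * rng.gauss(0.0, 1.0)
    theta_hat, p = rls_step(theta_hat, p, u, y)
```

With `lam = 0.99` the effective memory is on the order of a hundred samples, so the estimate re-converges after the jump; `lam = 1` would average over all history and track the shift much more slowly.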

## S4

Interesting package of tools from Christopher Ré’s lab, at the intersection of recurrent networks and linear feedback systems. See HazyResearch/state-spaces: Sequence Modeling with Structured State Spaces. I find these aesthetically satisfying, because I spent 2 years of my PhD trying to solve the same problem, and failed. These folks did a better job, so I find it slightly validating that the idea was not stupid.
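The structural trick, in miniature: a linear time-invariant SSM can be run as a recurrence (cheap autoregressive inference) or, equivalently, as a causal convolution with kernel K_j = C·Ā^j·B̄ (parallelizable training). S4's contribution is a structured parameterization that makes the kernel cheap to compute at scale, which this scalar sketch ignores; it just checks the two views agree.

```python
def ssm_recurrent(abar, bbar, c, us):
    """y_k = c * x_{k+1} with x_{k+1} = abar*x_k + bbar*u_k, x_0 = 0."""
    x, ys = 0.0, []
    for u in us:
        x = abar * x + bbar * u
        ys.append(c * x)
    return ys

def ssm_convolutional(abar, bbar, c, us):
    """The same map as a causal convolution: y_k = sum_j K[j] * u_{k-j},
    with kernel K[j] = c * abar**j * bbar."""
    kernel = [c * abar**j * bbar for j in range(len(us))]
    return [sum(kernel[j] * us[k - j] for j in range(k + 1))
            for k in range(len(us))]

us = [1.0, -0.5, 2.0, 0.0, 1.5]
ys_rec = ssm_recurrent(0.9, 0.5, 2.0, us)
ys_conv = ssm_convolutional(0.9, 0.5, 2.0, us)
```

S4 evaluates the convolution via FFT and keeps A in a structured (normal plus low-rank) form; the naive kernel above is quadratic in sequence length.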

## Incoming

- google-research/torchsde: Differentiable SDE solvers with GPU support and efficient sensitivity analysis. (Kidger et al. 2021; X. Li et al. 2020)
- Patrick Kidger’s thesis is the current canonical textbook on ODE learning (Kidger 2022).
- Corenflos et al. (2021) describe an optimal transport method
- Campbell et al. (2021) describe variational inference that factors out the unknown parameters.

## References

*Mathematical Programming Computation* 11 (1): 1–36.

*Acta Numerica* 28 (May): 1–174.

*Proceedings of the National Academy of Sciences* 111 (52): 18507–12.

*arXiv:1812.05916 [Math, q-Fin, Stat]*, January.

*Advances in Neural Information Processing Systems*. Vol. 32. Curran Associates, Inc.

*PLOS ONE* 11 (2): e0150171.

*Proceedings of ICLR*.

*Proceedings of ICLR*.

*Nature Computational Science* 2 (7): 433–42.

*Advances in Neural Information Processing Systems*. Vol. 32. Curran Associates, Inc.

*Advances in Neural Information Processing Systems 31*, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 6572–83. Curran Associates, Inc.

*Advances in Neural Information Processing Systems*. Vol. 33.

*arXiv:2102.07850 [Cs, Stat]*, June.

*Advances in Neural Information Processing Systems*. Vol. 33.

*arXiv:1904.01681 [Cs, Stat]*, April.

*Communications in Mathematics and Statistics* 5 (1): 1–11.

*Notices of the American Mathematical Society* 68 (04): 1.

*arXiv:1807.01083 [Cs, Math]*, July.

*Scandinavian Journal of Statistics* n/a (n/a).

*ICML*, 14.

*Advances in Neural Information Processing Systems*. Vol. 33.

*arXiv:1807.01613 [Cs, Stat]*, July, 10.

*arXiv:1902.10298 [Cs]*, February.

*Advances in Neural Information Processing Systems*. Vol. 33.

*arXiv:2007.04154 [Cs, q-Fin, Stat]*, July.

*arXiv:1810.01367 [Cs, Stat]*, October.

*Advances in Neural Information Processing Systems*, 34:572–85. Curran Associates, Inc.

*arXiv:1805.08034 [Cs, Math]*, May.

*Proceedings of the National Academy of Sciences* 115 (34): 8505–10.

*IMA Note*.

*Nature Machine Intelligence* 4 (11): 992–1003.

*arXiv:2006.04439 [Cs, Stat]*, December.

*Proceedings of ICLR*.

*Advances in Neural Information Processing Systems*. Vol. 33.

*arXiv:1812.04300 [Math, Stat]*, December.

*Advances in Neural Information Processing Systems 32*, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alché-Buc, E. Fox, and R. Garnett, 9847–58. Curran Associates, Inc.

*Advances in Neural Information Processing Systems*. Vol. 33.

*Proceedings of the 38th International Conference on Machine Learning*, 5443–52. PMLR.

*Proceedings of the 38th International Conference on Machine Learning*, 5453–63. PMLR.

*arXiv:2005.08926 [Cs, Stat]*, November.

*Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS)*, 7.

*Advances in Neural Information Processing Systems*, 9.

*arXiv:2007.14823 [Cond-Mat, Physics:nlin, q-Bio]*.

*Advances in Neural Information Processing Systems*. Vol. 33.

*International Conference on Artificial Intelligence and Statistics*, 3870–82. PMLR.

*Advances in Neural Information Processing Systems*. Vol. 33.

*Advances in Neural Information Processing Systems*. Vol. 32. Curran Associates, Inc.

*arXiv:1910.03193 [Cs, Stat]*, April.

*Advances in Neural Information Processing Systems*. Vol. 33.

*Nature Communications* 9 (1): 4950.

*arXiv:2003.08063 [Cs, Math, Stat]*, March.

*arXiv:2002.08071 [Cs, Stat]*.

*PMLR*, 2401–9.

*arXiv:2109.00173 [Cs, Stat]*, August.

*arXiv:1904.12933 [Quant-Ph, Stat]*, April.

*arXiv:1905.10437 [Cs, Stat]*, February.

*Bulletin of the American Mathematical Society* 80 (3): 503–5.

*Workshop on Bayesian Deep Learning*, 7.

*International Conference on Artificial Intelligence and Statistics*, 1126–36. PMLR.

*Advances in Neural Information Processing Systems*. Vol. 33.

*arXiv:2009.09346 [Cs]*, September.

*arXiv:1812.01892 [Cs]*, December.

*arXiv.org*, January.

*ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, 3905–9.

*arXiv:1905.12090 [Cs, Stat]*, May.

*Journal of Mathematical Imaging and Vision* 62 (3): 352–64.

*arXiv:1910.09349 [Cs, Stat]*, March.

*Proceedings of the 37th International Conference on Machine Learning*, 8459–68. PMLR.

*arXiv:2103.10153 [Cs, Stat]*, June.

*arXiv:2012.08405 [Cs, Eess]*, December.

*CoRR* abs/2006.09313.

*arXiv:1906.10264 [Cs, Stat]*, June.

*bioRxiv*, February, 272005.

*Physics-Based Deep Learning*. WWW.

*Proceedings of the Web Conference 2021*, 730–42. Ljubljana Slovenia: ACM.

*arXiv:1905.09883 [Cs, Stat]*, October.

*PMLR*, 3570–78.

*arXiv:1805.08349 [Cond-Mat, Stat]*, October.

*SIAM Journal on Scientific Computing* 42 (1): A292–317.

*arXiv:1905.10994 [Cs, Stat]*, October.

*Spatial Statistics* 37 (June): 100408.

*arXiv:1907.12998 [Cs, Stat]*, February.

*International Conference on Machine Learning*, 27060–74. PMLR.
