\(\newcommand{\solop}{\mathcal{G}^{\dagger}}\)

Using statistical or machine learning approaches to solve PDEs, and maybe even to perform inference through them.
There are many approaches to the ML solution of PDEs, and I will document them on an *ad hoc* basis as I need them.
No claim is made to completeness.

TODO: Reduce the proliferation of unclear symbols by introducing a specific example; clarify which neural nets represent operators, which represent specific functions, and between which spaces.

TODO: Harmonise the notation used in this section with subsections below; right now they match the papers’ notation but not each other.

TODO: should the intro section actually be filed under PDEs?

TODO: introduce a consistent notation for coordinate space, output spaces, and function space?

TODO: this is mostly Eulerian fluid flow models right now. Can we mention Lagrangian models at least?

## Background

Suppose we have a PDE defined over some input domain, which we presume is a time dimension and some number of spatial dimensions.
The PDE is specified by some differential operator \(\mathcal{D}\) and some *forcing* or *boundary condition* \(u\in \mathscr{U},\) as
\[\mathcal{D}[f]=u.\]
These functions will map from some coordinate space \(C\) to some output space \(O\).
At the moment we consider only compact sets of positive Lebesgue measure, \(C\subseteq\mathbb{R}^{d_C}\) and \(O\subseteq\mathbb{R}^{d_O}.\)
The first coordinate of the input space often has the special interpretation of time \(t\in \mathbb{R}\), and the subsequent coordinates are then spatial coordinates \(x\in D\subseteq \mathbb{R}^{d_{D}}\) where \(d_{D}=d_{C}-1.\)
Sometimes we make this explicit by writing the time coordinate separately as \(f(t,x).\)
A common case, concretely, is \(C=\mathbb{R} \times \mathbb{R}^2=\mathbb{R} \times D\) and \(O=\mathbb{R}.\)
For each time \(t\in \mathbb{R}\) we assume the instantaneous solution \(f(t, \cdot)\) to be an element of some Banach space \(\mathscr{A}\) of functions \(f(t, \cdot): D\to O.\)
The overall solutions \(f: C\to O\) have their own Banach space \(\mathscr{F}\).
More particularly, we might consider solutions on a restricted time domain \(t\in [0,T]\) and some spatial domain \(D\subseteq \mathbb{R}^2,\) where a solution is a function \(f\) that maps \([0,T] \times D \to \mathbb{R}.\)
This would naturally model, say, a 2D height-field evolving over time.
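To keep these symbols grounded, a concrete instance (my running example, not tied to any particular paper): take \(f(t,x)\) to be a temperature field over \(D\). The heat equation with source term \(u\) fits the template as
\[\mathcal{D}[f]:=\partial_t f-\kappa\nabla^2 f=u,\]
so \(\mathcal{D}\) bundles up the differential structure and \(u\) supplies the forcing.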

We have thrown the term *Banach space* about without making it clear which one we mean.
There are usually some implied smoothness properties and of course we would want to include some kind of metric to fully specify these spaces, but we gloss over that for now.

We have introduced one operator, the defining operator \(\mathcal{D}\).
Another that we think about a lot is the *PDE propagator* or *forward operator* \(\mathcal{P}_s,\) which produces a representation of the entire solution surface at some future moment, given current and boundary conditions.
\[\mathcal{P}_s[f(t, \cdot)]=f( t+s, \cdot).\]
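For autonomous dynamics (no explicit time dependence in \(\mathcal{D}\)) these propagators compose as a semigroup,
\[\mathcal{P}_s\circ\mathcal{P}_r=\mathcal{P}_{s+r},\qquad \mathcal{P}_0=\operatorname{Id},\]
which is what licenses rolling a learned one-step propagator forward repeatedly.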
We might also discuss a *solution operator*
\[\solop:\begin{array}{l}\mathscr{U}\to\mathscr{F}\\ u\mapsto f\end{array}\]
such that
\[\mathcal{D}\left[\solop[u]\right]=u.\]

Handling all these weird, and presumably infinite-dimensional, function spaces \(\mathscr{A},\mathscr{U},\mathscr{F},\dots\) on a finite computer requires us to introduce a notion of *discretisation*.
We need to find some finite-dimensional representations of these functions so that they can be computed in a finite machine.
PDE solvers use various tricks to do that, and each one is its own research field.
Finite difference approximations treat all the solutions as values on a grid, effectively approximating \(\mathscr{F}\) with some new space of functions \(\mathbb{Z}^2 \times \mathbb{Z} \to \mathbb{R},\) or, if you’d like, in terms of “bar chart” basis functions.
Finite element methods define the PDE over a more complicated indexing system of compactly-supported basis functions which form a mesh.
Particle systems approximate PDEs with moving particles that define their own adaptive basis.
If there is some other natural (preferably orthogonal) basis of functions on the solution surface, we might use that; for example, with the right structure, the eigenfunctions of the defining operator might give us such a basis.
Fourier bases are famous in this case.
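To make the discretisation story concrete, here is a minimal finite-difference sketch of the simplest case, a 1D heat equation with periodic boundaries (the function names are mine, purely illustrative):

```python
import numpy as np

def heat_step(f, kappa=1.0, dx=1.0, dt=0.1):
    """One explicit Euler step of f_t = kappa * f_xx on a periodic grid,
    using the three-point finite-difference Laplacian."""
    lap = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2
    return f + dt * kappa * lap

x = np.linspace(-5.0, 5.0, 64)
f0 = np.exp(-0.5 * x**2)  # initial condition: a Gaussian bump
f = f0.copy()
for _ in range(100):      # stable since dt * kappa / dx**2 <= 1/2
    f = heat_step(f)
```

The grid of 64 values is the finite-dimensional stand-in for the function \(f(t,\cdot)\in\mathscr{A}\); periodic boundaries keep the total "heat" conserved while the bump spreads out.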

A classic for neural nets is to learn a finite-difference approximation of the PDE on a grid of values and treat it as a convnet regression, and indeed the dynamical treatment of neural nets is based on that. For various practical reasons I would like to avoid requiring a grid on my input values as much as possible. For one thing, grid systems are memory intensive and need expensive GPUs. For another, it is hard to integrate observations at multiple resolutions into a gridded data system. For a third, the research field of image prediction is too crowded for easy publications. Thus, that will not be treated further.
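To see why the grid approach reads as a convnet, note that a finite-difference stencil *is* a convolution kernel; a sketch (periodic boundaries assumed):

```python
import numpy as np

f = np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
stencil = np.array([1.0, -2.0, 1.0])  # discrete 1D Laplacian kernel

# Applying the stencil by convolution (with wrap-around padding)...
lap_conv = np.convolve(np.pad(f, 1, mode="wrap"), stencil, mode="valid")
# ...matches the usual shift-based finite-difference Laplacian.
lap_roll = np.roll(f, -1) - 2 * f + np.roll(f, 1)
```

An explicit Euler update is then `f + dt * lap_conv`, i.e. identity plus a convolution, which is exactly the shape of a (linear, untrained) convolutional layer.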

A grid-free approach is given by graph networks, which learn a topology and interaction system.
This seems to naturally map on to PDEs of the kind that we usually solve by particle systems, e.g. fluid dynamics with immiscible substances.
Nothing wrong with this idea *per se*, but it does not seem to be the most compelling approach to me for my domain of spatiotemporal prediction where we already know the topology and can avoid all the complicated bits of graph networks.
So this I will also ignore for now.

There are a few options. For an overview of many other techniques see *Physics-based Deep Learning* by Philipp Holl, Maximilian Mueller, Patrick Schnell, Felix Trost, Nils Thuerey, and Kiwon Um (Thuerey et al. 2021). Brunton and Kutz's *Data-Driven Science and Engineering* (Brunton and Kutz 2019) covers related material; both go further than mere PDEs and consider general scientific settings. Also, the seminar series by the authors of that latter book is a moving feast of the latest results in this area.

Here we look in depth mainly at two important ones.

One approach learns a network \(\hat{f}\in \mathscr{F}, \hat{f}: C \to O\) such that \(\hat{f}\approx f\) (Raissi, Perdikaris, and Karniadakis 2019).
This is the annoyingly-named implicit representation trick.
Another approach is used in networks like Li, Kovachki, Azizzadenesheli, Liu, Bhattacharya, et al. (2020b) which learn the forward operator \(\mathcal{P}_1: \mathscr{A}\to\mathscr{A}.\)
When the papers mentioned talk about *operator learning*, this is the operator that they seem to mean by default.
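A deliberately minimal illustration of what "learning the forward operator" means (my own toy, not the architecture of the cited papers): for linear dynamics such as the discretised heat equation, \(\mathcal{P}_1\) is a matrix, so ordinary least squares on snapshot pairs recovers it; neural operators replace the linear fit with a network so that nonlinear dynamics can be handled.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 32, 0.1

def step(f):
    """Ground-truth propagator: one explicit heat-equation step, periodic BCs."""
    return f + dt * (np.roll(f, -1) - 2 * f + np.roll(f, 1))

# Training data: snapshot pairs (f_t, f_{t+1}) from random initial states.
X = rng.standard_normal((500, n))
Y = np.stack([step(f) for f in X])

# Least-squares fit of the matrix P such that f @ P ~= step(f).
P_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

f_test = rng.standard_normal(n)
pred = f_test @ P_hat  # learned one-step prediction
```

Because the true propagator here is linear and the data are noiseless, the fit is essentially exact; the interesting (and hard) cases are the nonlinear ones.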

This entire idea might seem weird if you are used to typical ML research. Unlike the usual neural network setting, we are not trying to solve a statistical inference problem by learning an unknown prediction function from data; rather, we have a partially or completely known function (a PDE solver) that we are trying to approximate with a more convenient substitute (a neural approximation to that PDE solver).

That approximant is not necessarily exciting as a PDE solver in itself. Probably we could have implemented the reference PDE solver on the GPU, or tweaked it a little, and got a faster PDE solver that way. Identifying when we get a non-trivial speed benefit from training a neural net to do a thing is a whole project in itself.

However, I would like it if the reference solvers were easier to differentiate through, and to construct posteriors with, which is what you might call tomography, or inverse problems.
But note that we *still* do not need to use ML methods to do that.
In fact, if I already know the PDE operator and am implementing it in any case, I could avoid the learning step and simply implement the PDE using an off-the-shelf differentiable solver, which would allow us to perform this inference.

Nonetheless, we might wish to learn to approximate a PDE, for whatever reason. Perhaps we do not know the governing equations precisely, or something like that. In my case, it is that I am required to match an industry-standard black-box solver that is not flexible, which is a common situation. YMMV.

There are several approaches to learning the dynamics of a PDE solver for given parameters.

## Neural operator

Learning to predict the *next step given this step*.
Think *image-to-image regression*.
A whole topic in itself.
See Neural operators.

## The PINN lineage

This body of literature encompasses both *DeepONet* (‘operator learning’) and *PINN* (‘physics informed neural nets’) approaches.
Distinctions TBD.

See PINNs.
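As a minimal sketch of the collocation idea underlying PINNs (substituting a linear polynomial ansatz for the neural net, so plain least squares replaces gradient descent; the problem and names are my own illustrative choices): solve \(f''(x) = -\sin x\) on \([0,\pi]\) with \(f(0)=f(\pi)=0\), whose exact solution is \(\sin x\), by penalising the PDE residual at collocation points together with the boundary conditions.

```python
import numpy as np

# Collocation points in the domain and a polynomial ansatz.
xs = np.linspace(0.0, np.pi, 50)
K = 10  # polynomial degree + 1

def basis(x, d=0):
    """Values of the d-th derivative of the monomial basis x^k at points x."""
    cols = []
    for k in range(K):
        if k < d:
            cols.append(np.zeros_like(x))
        else:
            coef = np.prod(np.arange(k, k - d, -1))
            cols.append(coef * x ** (k - d))
    return np.stack(cols, axis=1)

# Stack PDE-residual rows f''(x) = -sin(x) with boundary rows f(0) = f(pi) = 0.
A = np.vstack([basis(xs, d=2), basis(np.array([0.0, np.pi]))])
b = np.concatenate([-np.sin(xs), [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)
f_hat = basis(xs) @ c  # approximate solution at the collocation points
```

A PINN does the same thing with \(f\) parameterised by a network and the residual minimised by stochastic gradient descent, with derivatives obtained by autodiff rather than by differentiating the basis analytically.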

## Message passing methods

TBD

## DeepONet

From the people who brought you PINN, above, comes the paper of Lu, Jin, and Karniadakis (2020). The setup is related, but AFAICT differs in a few ways:

- we don’t (necessarily?) use the derivative information at the sensor locations
- we learn an operator mapping initial/latent conditions to output functions
- we decompose the input function space into a basis and then sample randomly from the basis in order to span (in some sense) the input space at training time

The authors argue they have found a good topology for a network that does this.

A DeepONet consists of two sub-networks, one for encoding the input function at a fixed number of sensors \(x_i, i = 1, \dots, m\) (branch net), and another for encoding the locations for the output functions (trunk net).
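In shapes, that architecture looks roughly like this (untrained random weights; the dimensions and helper names are mine, chosen only to show how branch and trunk combine):

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, h = 20, 8, 32  # number of sensors, latent dimension, hidden width

def mlp(dims):
    """A tiny random (untrained) multilayer perceptron."""
    Ws = [rng.standard_normal((a, b)) / np.sqrt(a) for a, b in zip(dims, dims[1:])]
    def net(z):
        for W in Ws[:-1]:
            z = np.tanh(z @ W)
        return z @ Ws[-1]
    return net

branch = mlp([m, h, p])  # encodes the input function sampled at m fixed sensors
trunk = mlp([1, h, p])   # encodes the query location y for the output function

def deeponet(u_sensors, ys):
    """G(u)(y) ~= sum_k branch_k(u) * trunk_k(y), evaluated at each y."""
    return trunk(ys[:, None]) @ branch(u_sensors)

x_sensors = np.linspace(0, 1, m)
u = np.sin(2 * np.pi * x_sensors)  # an input function, observed at the sensors
ys = np.linspace(0, 1, 50)         # locations at which to query the output
out = deeponet(u, ys)
```

The key structural point is the inner product over the latent index \(k\): the branch net sees only sensor values, the trunk net sees only query coordinates, and they meet in a rank-\(p\) bilinear combination.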

This addresses some problems with generalisation that make the PINN setup seem unsatisfactory; in particular we can change the inputs, or project arbitrary inputs forward.

The boundary conditions and input points appear to stay fixed though, and inference of the unknowns is still vexed.

🏗️

## GAN approaches

One approach I am less familiar with advocates for conditional GAN models to simulate conditional latent distributions. I’m curious about these but they look more computationally expensive and specific than I need at the moment, so I’m filing for later (G. Bao et al. 2020; Yang, Zhang, and Karniadakis 2020; Zang et al. 2020).

A recent example from fluid-flow dynamics (Chu et al. 2021) has particularly beautiful animations attached.

## Advection-diffusion PDEs in particular

F. Sigrist, Künsch, and Stahel (2015b) finds a nice spectral representation of certain classes of stochastic PDE. These are extended in Liu, Yeo, and Lu (2020) to non-stationary operators. By being less generic, these come out with computationally convenient spectral representations.

## As implicit representations

Many of these PDE methods effectively use the “implicit representation” trick, i.e. they produce networks that map from input coordinates to values of solutions at those coordinates. This means we share some interesting tools with those networks, such as position encodings. TBD.
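For instance, random Fourier position encodings, which implicit-representation networks commonly use to overcome spectral bias; a sketch (the frequency scale of 10.0 is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(coords, B):
    """Encode raw coordinates as sines and cosines of random projections;
    an implicit network then regresses field values from these features."""
    proj = 2 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

B = rng.standard_normal((2, 16)) * 10.0  # random frequencies for (t, x) inputs
coords = rng.uniform(size=(100, 2))      # query points in coordinate space C
feats = fourier_features(coords, B)      # what the network actually sees
```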

## Differentiable solvers

Suppose we are keen to devise yet another method that will do clever things to augment PDE solvers with ML somehow.
To that end it would be nice to have a PDE solver that was not a completely black box but which we could interrogate for useful gradients.
Obviously all PDE solvers *use* gradient information, but only some of them expose that to us as users;
e.g. MODFLOW will give me a solution field but not the gradients that were used to calculate that field.
In ML toolkits accessing this information is easy.

TODO: define adjoint method etc.

OTOH, there is a lot of sophisticated work done by PDE solvers that is hard for ML toolkits to recreate. That is why PDE solvers are a thing.

Tools which combine both worlds, PDE solutions and ML optimisations, do exist; there are adjoint method systems for mainstream PDE solvers just as there are PDE solvers for ML frameworks. Let us list some of the options under differentiable PDE solvers.
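A toy illustration of the adjoint idea in pure numpy (entirely my own sketch, no solver library implied): one explicit heat step is linear, \(f_{t+1}=Af_t\), so the gradient of a terminal loss with respect to the initial condition is obtained by sweeping backwards with \(A^{\top}\), and we can check it against a finite difference.

```python
import numpy as np

n, dt, steps = 16, 0.1, 20
rng = np.random.default_rng(0)

def step(f):
    """Explicit Euler heat step with periodic boundaries; linear: f -> A f."""
    return f + dt * (np.roll(f, -1) - 2 * f + np.roll(f, 1))

# The stencil is symmetric, so A^T = A and the adjoint step equals the step.
step_T = step

f0 = rng.standard_normal(n)
target = rng.standard_normal(n)

f = f0.copy()   # forward sweep
for _ in range(steps):
    f = step(f)
loss = 0.5 * np.sum((f - target) ** 2)

g = f - target  # adjoint (reverse) sweep: dL/df0 = (A^T)^steps (f_T - target)
for _ in range(steps):
    g = step_T(g)

# Finite-difference check of one coordinate of the gradient.
eps, i = 1e-6, 3
f_pert = f0.copy()
f_pert[i] += eps
fp = f_pert
for _ in range(steps):
    fp = step(fp)
fd = (0.5 * np.sum((fp - target) ** 2) - loss) / eps
```

For nonlinear PDEs, \(A\) becomes the Jacobian of the step linearised about the forward trajectory, which is exactly what autodiff-based differentiable solvers compute for us.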

## Deep Ritz method

Fits here? (E, Han, and Jentzen 2017; E and Yu 2018; Müller and Zeinhofer 2020)

## Datasets and training harnesses

As with more typical neural net applications, PDE emulators can be trained from datasets. Here are some:

- pdebench/PDEBench: PDEBench: An Extensive Benchmark for Scientific Machine Learning (Takamoto et al. 2022) (Disclaimer: I contributed significantly to this project)
- karlotness/nn-benchmark: An extensible benchmark suite to evaluate data-driven physical simulation (Otness et al. 2021)

But if we have a simulator, we can run it *live* and generate data on the fly.
Here is a tool to facilitate that.

> Melissa is a file avoiding, fault tolerant and elastic framework, to run large scale sensitivity analysis (Melissa-SA) and large scale deep surrogate training (Melissa-DL) on supercomputers. With Melissa-SA, largest runs so far involved up to 30k cores, executed 80 000 parallel simulations, and generated 288 TB of intermediate data that did not need to be stored on the file system …
>
> Classical sensitivity analysis and deep surrogate training consist in running different instances of a simulation with different sets of input parameters, storing the results to disk to later read them back to train a neural network or to compute the required statistics. The amount of storage needed can quickly become overwhelming, with the associated long read time that makes data processing time consuming. To avoid this pitfall, scientists reduce their study size by running low resolution simulations or down-sampling output data in space and time.
>
> Melissa (Fig. 1) bypasses this limitation by avoiding intermediate file storage. Melissa processes the data online (in transit), enabling very large scale data processing.

## Incoming

> TorchPhysics is a Python library of (mesh-free) deep learning methods to solve differential equations. You can use TorchPhysics e.g. to
>
> - solve ordinary and partial differential equations
> - train a neural network to approximate solutions for different parameters
> - solve inverse problems and interpolate external data
>
> The following approaches are implemented using high-level concepts to make their usage as easy as possible:
>
> - physics-informed neural networks (PINN)
> - QRes
> - the Deep Ritz method
> - DeepONets and Physics-Informed DeepONets

NVIDIA’s MODULUS (formerly SimNet) needs filing (Hennigh et al. 2020).

- Modulus
- NVIDIA Announces Modulus: A Framework for Developing Physics ML Models for Digital Twins
- NVIDIA Creates Framework for AI to Learn Physics

They are implementing many popular algorithms, but with a comically clunky distribution system and onerous licensing. Have not yet made time to explore.

## References

*arXiv:2005.12998 [Math]*, January.

*SIAM Journal on Scientific Computing* 38 (1): A243–72.

*Acta Numerica* 30 (May): 1–86.

*Proceedings of The 28th Conference on Learning Theory*, 40:113–49. Paris, France: PMLR.

*Inverse Problems* 36 (11): 115003.

*Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence*, 118–28. PMLR.

*Proceedings of the National Academy of Sciences* 116 (31): 15344–49.

*AIAA SCITECH 2022 Forum*. American Institute of Aeronautics and Astronautics.

*Journal of Nonlinear Science* 29 (4): 1563–1619.

*arXiv:2203.13760 [Physics]*, March.

*arXiv:2005.03180 [Cs, Math, Stat]*, May.

*GAMM-Mitteilungen* 44 (2): e202100006.

*International Conference on Learning Representations*.

*Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control*. Cambridge: Cambridge University Press.

*ACM Transactions on Graphics* 40 (4): 1–13.

*Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS)*, 6.

*arXiv:2012.07244 [Cs]*, March.

*Proceedings of the National Academy of Sciences* 118 (2).

*Notices of the American Mathematical Society* 68 (04): 1.

*Communications in Mathematics and Statistics* 5 (4): 349–80.

*arXiv:2008.13333 [Cs, Math]*, September.

*Communications in Mathematics and Statistics* 6 (1): 1–12.

*Advances in Computational Mathematics* 45 (5-6): 2503–32.

*Journal of Computational Physics* 384 (May): 1–15.

*arXiv:2010.10876 [Cs]*, October.

*arXiv Preprint arXiv:2106.13281*.

*arXiv Preprint arXiv:2007.04954*.

*Acta Numerica* 30 (May): 445–554.

*Computer Methods in Applied Mechanics and Engineering* 375 (March): 113533.

*Fixed Point Theory*. Springer Monographs in Mathematics. New York, NY: Springer New York.

*arXiv:2012.11857 [Cs, Math, Stat]*, December.

*Computer Methods in Applied Mechanics and Engineering* 345 (March): 75–99.

*Proceedings of the National Academy of Sciences* 115 (34): 8505–10.

*arXiv:2012.07938 [Physics]*, December.

*Frontiers in Applied Mathematics and Statistics* 7.

*NeurIPS Workshop*.

*ICLR*, 5.

*ACM Transactions on Graphics* 38 (6): 1–16.

*arXiv:2112.05309 [Cs]*, December.

*Networks & Heterogeneous Media* 15 (2): 247.

*The Journal of Machine Learning Research* 17 (1): 613–66.

*Nature Reviews Physics* 3 (6): 422–40.

*arXiv:2001.08055 [Physics, Stat]*, January.

*arXiv:1912.00873 [Physics, Stat]*, November.

*arXiv:1912.07443 [Physics, Stat]*, December.

*Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS)*, 7.

*Proceedings of the National Academy of Sciences* 118 (21).

*arXiv:1801.07337 [Physics]*, March.

*arXiv:2107.07562 [Cs, Math]*, July.

*arXiv:2108.08481 [Cs, Math]*.

*Advances in Neural Information Processing Systems*, 34:26548–60. Curran Associates, Inc.

*IEEE Transactions on Neural Networks* 9 (5): 987–1000.

*arXiv:2010.08895 [Cs, Math]*, October.

*Advances in Neural Information Processing Systems*. Vol. 33.

*Canadian Journal of Statistics* 35 (4): 597–606.

*International Conference on Learning Representations*.

*Journal of the American Statistical Association* 0 (0): 1–18.

*Proceedings of the 35th International Conference on Machine Learning*, 3208–16. PMLR.

*arXiv:1910.03193 [Cs, Stat]*, April.

*SIAM Review* 63 (1): 208–28.

*Journal of Computational Physics* 438 (August): 110361.

*Journal of Open Source Software* 4 (38): 1292.

*arXiv:2111.09880 [Physics]*, November.

*Probabilistic Engineering Mechanics* 57 (July): 14–25.

*The Art of Differentiating Computer Programs: An Introduction to Algorithmic Differentiation*. Society for Industrial and Applied Mathematics.

*Reliability Engineering & System Safety* 106 (October): 179–90.

*SIAM Journal on Scientific Computing* 38 (4): B521–38.

*Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences* 471 (2179): 20150018.

*Physica D: Nonlinear Phenomena* 406 (May): 132401.

*MIT Web Domain*, 6.

*Journal of Computational Physics* 378 (February): 686–707.

*arXiv:2109.07573 [Physics]*, September.

*Environmental Modelling & Software* 144 (October): 105159.

*Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS)*, 6.

*Scientific Reports* 12 (1): 7557.

*Advances in Neural Information Processing Systems*. Vol. 33.

*NeurIPS*, 5.

*arXiv:2203.10131 [Physics]*, March.

*Third Workshop on Machine Learning and the Physical Sciences (NeurIPS 2020)*.

*Journal of Statistical Software* 63 (14).

*Journal of the Royal Statistical Society: Series B (Statistical Methodology)* 77 (1): 3–33.

*Journal of Computational Physics* 375 (December): 1339–64.

*Statistics and Computing* 30 (2): 419–46.

*IEEE Transactions on Pattern Analysis and Machine Intelligence* 42 (8): 1968–80.

*arXiv:2006.15641 [Cs, Stat]*, June.

*Physics-Based Deep Learning*. WWW.

*arXiv:2007.00016 [Physics]*, January.

*Array* 13 (March): 100110.

*arXiv:1701.07989 [Math]*, April.

*Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, 1457–66. KDD ’20. New York, NY, USA: Association for Computing Machinery.

*Advances in Water Resources* 163 (May): 104180.

*arXiv:2011.11955 [Cs, Math]*.

*Journal of Computational Physics* 425 (January): 109913.

*SIAM Journal on Scientific Computing* 42 (1): A292–317.

*Geoscientific Model Development Discussions*, July, 1–51.

*Journal of Computational Physics* 411 (June): 109409.

*SIAM Journal on Scientific Computing* 42 (2): A639–65.

*Journal of Computational Physics* 397 (November): 108850.

*International Conference on Machine Learning*, 27060–74. PMLR.
