Machine learning for partial differential equations



\(\newcommand{\solop}{\mathcal{G}^{\dagger}}\)

Using statistical or machine learning approaches to solve PDEs, and maybe even to perform inference through them. There are many approaches to ML learning of PDEs; I document them on an ad hoc basis as I need them. No claim is made to completeness.

TODO: Reduce the proliferation of unclear symbols by introducing a specific example: which neural nets represent operators, which represent specific functions, between which spaces, etc.

TODO: Harmonise the notation used in this section with subsections below; right now they match the papers’ notation but not each other.

TODO: should the intro section actually be filed under PDEs?

TODO: introduce a consistent notation for coordinate space, output spaces, and function space?

TODO: this is mostly Eulerian fluid flow models right now. Can we mention Lagrangian models at least?

Background

⚠️ this section is a mess and I hate it now ⚠️

Suppose we have a PDE defined over some input domain, which we presume comprises a time dimension and some number of spatial dimensions. The PDE is specified by some differential operator \(\mathcal{D}\) and some forcing or boundary condition \(u\in \mathscr{U},\) as \[\mathcal{D}[f]=u.\] These functions map from some coordinate space \(C\) to some output space \(O\). The first coordinate of the input space often has the special interpretation as time \(t\in \mathbb{R}\) and the subsequent coordinates are then spatial coordinates \(x\in D\subseteq \mathbb{R}^{d_{D}}\) where \(d_{D}=d_{C}-1.\) Sometimes we make this explicit by writing the time coordinate separately as \(f(t,x).\) A common case, concretely, is \(C=\mathbb{R} \times \mathbb{R}^2=\mathbb{R} \times D\) and \(O=\mathbb{R}.\) For each time \(t\in \mathbb{R}\) we assume the instantaneous solution \(f(t, \cdot)\) to be an element of some Banach space \(\mathscr{A}\) of functions \(f(t, \cdot): D\to O.\) The overall solutions \(f: C\to O\) have their own Banach space \(\mathscr{F}\). More particularly, we might consider solutions on a restricted time domain \(t\in [0,T]\) and some spatial domain \(D\subseteq \mathbb{R}^2,\) where a solution is a function \(f\) that maps \([0,T] \times D \to \mathbb{R}.\) This would naturally model, say, a 2D height field evolving over time.
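For a concrete (purely illustrative) instance of this template: the 2D heat equation with source term \(u\) has defining operator \(\mathcal{D}[f] = \partial_t f - \kappa \nabla^2 f,\) so that \(\mathcal{D}[f]=u\) over \(C=[0,T]\times D\) with \(D\subseteq\mathbb{R}^2,\) \(O=\mathbb{R},\) and diffusivity parameter \(\kappa>0.\) The instantaneous solution \(f(t,\cdot)\) is then a temperature field over \(D.\)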

We have thrown the term Banach space about without making it clear which one we mean. There are usually some implied smoothness properties, and of course we would need to specify a norm to fully pin down these spaces, but we gloss over that for now.

We have introduced one operator, the defining operator \(\mathcal{D}\). Another that we think about a lot is the PDE propagator or forward operator \(\mathcal{P}_s,\) which produces a representation of the entire solution surface at some future moment, given current and boundary conditions: \[\mathcal{P}_s[f(t, \cdot)]=f( t+s, \cdot).\] We might also discuss a solution operator \[\solop:\begin{array}{l}\mathscr{U}\to\mathscr{F}\\ u\mapsto f\end{array}\] such that \[\mathcal{D}\left[\solop[u]\right]=u.\]
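To make \(\mathcal{P}_s\) concrete, here is a minimal sketch (mine, not from the cited papers) of a discretised propagator for the heat-equation example above: an explicit finite-difference step in NumPy, with arbitrary stand-in values for the grid, time step and diffusivity.

```python
import numpy as np

def propagate(f, s, kappa=0.1, dx=1.0, dt=0.01):
    """Approximate P_s[f]: advance a gridded field f(t, .) by time s.

    f: 2D array sampling the field on a regular grid with spacing dx.
    Uses forward-Euler steps of size dt with a 5-point Laplacian and
    periodic boundaries; stable only for dt <= dx**2 / (4 * kappa).
    """
    n_steps = int(round(s / dt))
    for _ in range(n_steps):
        lap = (
            np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
            + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1)
            - 4.0 * f
        ) / dx**2
        f = f + dt * kappa * lap
    return f

f0 = np.random.randn(64, 64)   # an instantaneous solution f(0, .)
f1 = propagate(f0, s=1.0)      # approximately f(1, .)
```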

Handling all these weird, and presumably infinite-dimensional, function spaces \(\mathscr{A},\mathscr{U},\mathscr{F},\dots\) on a finite computer requires us to introduce a notion of discretisation: we need finite-dimensional representations of these functions so that they can be computed on a finite machine. PDE solvers use various tricks to do that, and each one is its own research field. Finite-difference approximations treat the solutions as values on a grid, effectively approximating \(\mathscr{F}\) with some new space of functions \(\mathbb{Z}^2 \times \mathbb{Z} \to \mathbb{R},\) or, if you like, representing them in terms of “bar chart” basis functions. Finite element methods define the PDE over a more complicated indexing system of compactly-supported basis functions which form a mesh. Particle systems approximate PDEs with moving particles which define their own adaptive basis. If there is some other natural (preferably orthogonal) basis of functions on the solution surface we might use that instead; for example, with the right structure the eigenfunctions of the defining operator might give us such a basis, Fourier bases being the famous case.
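A minimal sketch of two such discretisations for a single instantaneous solution \(f(t,\cdot)\) on a periodic 1D domain, with resolution choices that are arbitrary stand-ins of mine:

```python
import numpy as np

# Grid ("bar chart") representation: sample f(t, .) at n equispaced points.
n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)
f_grid = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)

# Spectral representation: coefficients in a Fourier basis.
f_hat = np.fft.rfft(f_grid)

# Truncating to the lowest few modes gives a compressed, smooth representation.
k_max = 8
f_hat_trunc = np.zeros_like(f_hat)
f_hat_trunc[:k_max] = f_hat[:k_max]
f_smooth = np.fft.irfft(f_hat_trunc, n=n)  # back onto the grid
```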

A classic approach for neural nets is to learn a finite-difference approximation of the PDE on a grid of values and treat it as a convnet regression problem; indeed, the dynamical treatment of neural nets is based on that. For various practical reasons I would like to avoid requiring a grid over my input values as much as possible. For one thing, grid systems are memory-intensive and need expensive GPUs. For another, it is hard to integrate observations at multiple resolutions into a gridded data system. For a third, the research field of image prediction is too crowded for easy publications. Thus, gridded methods will not be treated further here.

A grid-free approach is graph networks that learn a topology and an interaction system. This seems to map naturally onto PDEs of the kind that we usually solve by particle systems, e.g. fluid dynamics with immiscible substances. Nothing wrong with this idea per se, but it does not seem the most compelling approach for my domain of spatiotemporal prediction, where we already know the topology and can avoid all the complicated bits of graph networks. So this I will also ignore for now.

There are a few options. For an overview of many other techniques see Physics-based Deep Learning by Philipp Holl, Maximilian Mueller, Patrick Schnell, Felix Trost, Nils Thuerey and Kiwon Um (Thuerey et al. 2021). Brunton and Kutz's Data-Driven Science and Engineering (Brunton and Kutz 2019) covers related material; both go farther than mere PDEs and consider general scientific settings. The seminar series by the authors of the latter book is a moving feast of the latest results in this area.

Here we look in depth mainly at two important ones.

One approach learns a network \(\hat{f}\in \mathscr{F}, \hat{f}: C \to O\) such that \(\hat{f}\approx f\) (Raissi, Perdikaris, and Karniadakis 2019). This is the annoyingly-named implicit representation trick. Another approach is used in networks like Li, Kovachki, Azizzadenesheli, Liu, Bhattacharya, et al. (2020b) which learn the forward operator \(\mathcal{P}_1: \mathscr{A}\to\mathscr{A}.\) When the papers mentioned talk about operator learning, this is the operator that they seem to mean by default.
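A minimal sketch of the first flavour in PyTorch: a small MLP mapping a coordinate \((t,x,y)\in C\) to a solution value in \(O=\mathbb{R}.\) This is only the representation; the PINN papers add PDE-residual and boundary terms to the training loss, which are omitted here.

```python
import torch
from torch import nn

# \hat{f}: C -> O, with C = R x R^2 (time plus two spatial coordinates).
f_hat = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

txy = torch.tensor([[0.5, 0.1, -0.3]])  # a single coordinate (t, x, y)
value = f_hat(txy)                      # \hat{f}(t, x, y), shape (1, 1)
```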

Physics-informed approximation of dynamics

This entire idea might seem weird if you are used to typical ML research. Unlike the usual neural network setting, we are not starting from a statistical inference problem in which we must learn an unknown prediction function from data; instead, we have a partially or completely known function (a PDE solver) that we are trying to approximate with a more convenient substitute (a neural approximation to that PDE solver).

That approximant is not necessarily exciting as a PDE solver in itself. Probably we could have implemented the reference PDE solver on the GPU, or tweaked it a little, and got a faster PDE solver. Identifying when we have a non-trivial speed benefit from training a neural net to do a thing is a whole project in itself.

However, I would like it if the reference solvers were easier to differentiate through, and to construct posteriors with - what you might call tomography, or inverse problems. But note that we still do not need to use ML methods to do that. In fact, if I already know the PDE operator and am implementing it in any case, I could avoid the learning step and simply implement the PDE using an off-the-shelf differentiable solver, which would allow us to perform this inference.

Nonetheless, we might wish to learn to approximate a PDE, for whatever reason. Perhaps we do not know the governing equations precisely, or something like that. In my case it is that I am required to match an industry-standard black-box solver that is not flexible, which is a common reason. YMMV.

There are several approaches to learning the dynamics of a PDE solver for given parameters.

Neural operator

Learning to predict the next step given this step. Think image-to-image regression. A whole topic in itself. See Neural operators.
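A minimal sketch of this framing, with made-up shapes and a plain CNN standing in for whatever operator architecture (FNO, U-Net, …) one actually uses: train a network to map the field at one time step to the field at the next, given pairs generated by a reference solver.

```python
import torch
from torch import nn

# Emulator: field at time t (1 channel, 64x64 grid) -> field at time t+1.
emulator = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)

# Fake training pairs; in practice (f_t, f_next) come from a reference solver.
f_t = torch.randn(16, 1, 64, 64)
f_next = torch.randn(16, 1, 64, 64)

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(emulator(f_t), f_next)
    loss.backward()
    opt.step()
```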

The PINN lineage

This body of literature encompasses both DeepONet (‘operator learning’) and PINN (‘physics informed neural nets’) approaches. Distinctions TBD.

See PINNs.
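To make the “physics-informed” part concrete, here is a minimal sketch (my own, not from the cited papers) of the PINN-style residual loss for the 1D heat equation \(\partial_t f - \kappa\,\partial_{xx} f = 0,\) using automatic differentiation to evaluate the defining operator at sampled collocation points; boundary and initial-condition terms are omitted.

```python
import torch
from torch import nn

f_hat = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
kappa = 0.1

def pde_residual_loss(n_points=256):
    tx = torch.rand(n_points, 2, requires_grad=True)   # collocation points (t, x)
    f = f_hat(tx)
    grads = torch.autograd.grad(f.sum(), tx, create_graph=True)[0]
    f_t, f_x = grads[:, 0], grads[:, 1]
    f_xx = torch.autograd.grad(f_x.sum(), tx, create_graph=True)[0][:, 1]
    residual = f_t - kappa * f_xx                       # D[f_hat] at the points
    return (residual ** 2).mean()

opt = torch.optim.Adam(f_hat.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    pde_residual_loss().backward()
    opt.step()
```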

Message passing methods

TBD

DeepONet

See operator learning.

Adversarial approaches

One approach I am less familiar with advocates (conditional) GAN models for simulating (conditional) latent distributions. I’m curious about these, but they look more computationally expensive and specific than I need at the moment, so I’m filing them for later (G. Bao et al. 2020; Yang, Zhang, and Karniadakis 2020; Zang et al. 2020).

A recent example from fluid-flow dynamics (Chu et al. 2021) has particularly beautiful animations.

Advection-diffusion PDEs in particular

F. Sigrist, Künsch, and Stahel (2015b) find a nice spectral representation of certain classes of stochastic PDE. This is extended in Liu, Yeo, and Lu (2020) to non-stationary operators. By being less generic, these approaches come out with computationally convenient spectral representations.

Inverse problems

Tomography through PDEs.

See Inverse problems in PDEs.

As implicit representations

Many of these PDE methods effectively use the “implicit representation” trick, i.e. they produce networks that map from input coordinates to values of solutions at those coordinates. This means we share some interesting tools with those networks, such as position encodings. TBD.
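A minimal sketch of one such shared tool, a random Fourier-feature position encoding applied to the input coordinates before the MLP; the frequency scale and feature count here are arbitrary choices of mine.

```python
import math
import torch
from torch import nn

class FourierFeatures(nn.Module):
    """Map coordinates c to [sin(2*pi*B c), cos(2*pi*B c)] with a fixed random B."""
    def __init__(self, in_dim=3, n_features=64, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(n_features, in_dim) * scale)

    def forward(self, coords):
        proj = 2 * math.pi * coords @ self.B.T
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

encoder = FourierFeatures()
f_hat = nn.Sequential(encoder, nn.Linear(128, 64), nn.Tanh(), nn.Linear(64, 1))
value = f_hat(torch.tensor([[0.5, 0.1, -0.3]]))  # \hat{f}(t, x, y)
```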

Differentiable solvers

Suppose we are keen to devise yet another method that will do clever things to augment PDE solvers with ML somehow. To that end it would be nice to have a PDE solver that is not a complete black box but which we can interrogate for useful gradients. Obviously all PDE solvers use gradient information, but only some of them expose it to us as users; e.g. MODFLOW will give me a solution field but not the gradients used to calculate that solution, neither spatial gradients nor sensitivities with respect to the parameters. In ML toolkits, accessing this information is easy.
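A minimal sketch of the kind of gradient access I mean, using PyTorch autograd to differentiate a crude explicit heat-equation solve with respect to its diffusivity parameter; everything here is a toy stand-in, not MODFLOW or any real solver.

```python
import torch

kappa = torch.tensor(0.1, requires_grad=True)   # physical parameter of interest
f = torch.randn(64, 64)                         # initial field
dt, dx = 0.01, 1.0

for _ in range(100):                            # differentiable forward solve
    lap = (
        torch.roll(f, 1, 0) + torch.roll(f, -1, 0)
        + torch.roll(f, 1, 1) + torch.roll(f, -1, 1) - 4 * f
    ) / dx**2
    f = f + dt * kappa * lap

misfit = (f ** 2).mean()    # a toy misfit against zero "observations"
misfit.backward()
print(kappa.grad)           # sensitivity of the misfit to the diffusivity
```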

TODO: define adjoint method etc.

OTOH, there is a lot of sophisticated work done by PDE solvers that is hard for ML toolkits to recreate. That is why PDE solvers are a thing.

Tools which combine both worlds, PDE solutions and ML optimisations, do exist; there are adjoint method systems for mainstream PDE solvers just as there are PDE solvers for ML frameworks. Let us list some of the options under differentiable PDE solvers.

Datasets and training harnesses

As with more typical neural net applications, PDE emulators can be trained from pre-computed datasets; PDEBench (Takamoto et al. 2022), for example, provides benchmark datasets for exactly this purpose.

But if we have a simulator, we can run it live and generate data on the fly. Here is a tool to facilitate that.

Inria’s Melissa

Melissa is a file avoiding, fault tolerant and elastic framework, to run large scale sensitivity analysis (Melissa-SA) and large scale deep surrogate training (Melissa-DL) on supercomputers. With Melissa-SA, largest runs so far involved up to 30k core, executed 80 000 parallel simulations, and generated 288 TB of intermediate data that did not need to be stored on the file system …

Classical sensitivity analysis and deep surrogate training consist in running different instances of a simulation with different set of input parameters, store the results to disk to later read them back to train a Neural Network or to compute the required statistics. The amount of storage needed can quickly become overwhelming, with the associated long read time that makes data processing time consuming. To avoid this pitfall, scientists reduce their study size by running low resolution simulations or down-sampling output data in space and time.

Melissa (Fig. 1) bypasses this limitation by avoiding intermediate file storage. Melissa processes the data online (in transit) enabling very large scale data processing:

Tooling

Torchphysics

boschresearch/torchphysics/Tutorial: Understanding the structure of TorchPhysics

TorchPhysics is a Python library of (mesh-free) deep learning methods to solve differential equations. You can use TorchPhysics e.g. to

  • solve ordinary and partial differential equations
  • train a neural network to approximate solutions for different parameters
  • solve inverse problems and interpolate external data

The following approaches are implemented using high-level concepts to make their usage as easy as possible:

  • physics-informed neural networks (PINN)
  • QRes
  • the Deep Ritz method
  • DeepONets and Physics-Informed DeepONets

DeepXDE

DeepXDE is a reference solver implementation for PINN and DeepONet (Lu et al. 2021).

Use DeepXDE if you need a deep learning library that

  • solves forward and inverse partial differential equations (PDEs) via physics-informed neural network (PINN),
  • solves forward and inverse integro-differential equations (IDEs) via PINN,
  • solves forward and inverse fractional partial differential equations (fPDEs) via fractional PINN (fPINN),
  • approximates functions from multi-fidelity data via multi-fidelity NN (MFNN),
  • approximates nonlinear operators via deep operator network (DeepONet),
  • approximates functions from a dataset with/without constraints.

You might need to moderate your expectations a little; I did, after that bold description. This is an impressive library, but as covered above, the classes of problems it can solve are more limited than one might hope from the description. Think of it as a neural network library that handles certain PDE calculations and you will not go too far astray.
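For a flavour of the API, here is a minimal PINN example following the pattern of the DeepXDE documentation, solving the 1D Poisson problem \(-u''(x) = 2\) with \(u(\pm 1)=0\) (exact solution \(u(x)=1-x^2\)); exact call signatures vary between DeepXDE versions, so treat this as a sketch rather than gospel.

```python
import deepxde as dde

def pde(x, u):
    u_xx = dde.grad.hessian(u, x)
    return -u_xx - 2  # residual of -u'' = 2

geom = dde.geometry.Interval(-1, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=64, num_boundary=2)

net = dde.nn.FNN([1, 32, 32, 1], "tanh", "Glorot uniform")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=5000)  # older versions spell this epochs=
```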

NeuralOperator

Neural Operators in PyTorch:

neuraloperator is a comprehensive library for learning neural operators in PyTorch. It is the official implementation for Fourier Neural Operators and Tensorized Neural Operators.
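A sketch of basic usage, based on the pattern in the neuraloperator README; argument names and defaults change between releases, so check against the version you install.

```python
import torch
from neuralop.models import FNO

# A 2D Fourier Neural Operator mapping a 1-channel field to a 1-channel field.
model = FNO(n_modes=(16, 16), hidden_channels=32, in_channels=1, out_channels=1)

x = torch.randn(4, 1, 64, 64)   # batch of input fields on a 64x64 grid
y = model(x)                    # predicted output fields, same spatial shape
```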

Modulus

NVIDIA’s MODULUS (formerly SimNet) (Hennigh et al. 2020) has the full marketing muscle of NVIDIA behind it.

Not currently recommended, due to comically clunky distribution system and onerous licensing.

Notable clauses from the license:

  1. LIMITATIONS. Your license to use the Modulus Deliverables is restricted as follows:
     1. The Modulus Deliverables are licensed for you to develop services and applications only for their use in systems with NVIDIA GPUs.
     2. You may not reverse engineer, decompile or disassemble, or remove copyright or other proprietary notices from any portion of the Modulus Deliverables or copies of the Modulus Deliverables.
     3. Except as expressly provided in this license, you may not copy, sell, rent, sublicense, transfer, distribute, modify, or create derivative works of any portion of the Modulus Deliverables. For clarity, you may not distribute or sublicense the Modulus Deliverables as a stand-alone product.

The Modulus deliverables run just fine on Google Colab, but I am not sure whether that is legal.

CliffordLayers

Surprising twist: Clifford algebras are useful for ML+PDEs.

microsoft/cliffordlayers: CliffordLayers

We propose Geometric Clifford Algebra Networks (GCANs) that are based on symmetry group transformations using geometric (Clifford) algebras. GCANs are particularly well-suited for representing and manipulating geometric transformations, often found in dynamical systems. We first review the quintessence of modern (plane-based) geometric algebra, which builds on isometries encoded as elements of the Pin(p,q,r) group. We then propose the concept of group action layers, which linearly combine object transformations using pre-specified group actions. Together with a new activation and normalization scheme, these layers serve as adjustable geometric templates that can be refined via gradient descent. Theoretical advantages are strongly reflected in the modeling of three-dimensional rigid body transformations as well as large-scale fluid dynamics simulations, showing significantly improved performance over traditional methods.

References

Alexanderian, Alen. 2021. Optimal Experimental Design for Infinite-Dimensional Bayesian Inverse Problems Governed by PDEs: A Review.” arXiv:2005.12998 [Math], January.
Alexanderian, Alen, Noemi Petra, Georg Stadler, and Omar Ghattas. 2016. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-Dimensional Bayesian Nonlinear Inverse Problems.” SIAM Journal on Scientific Computing 38 (1): A243–72.
Altmann, Robert, Patrick Henning, and Daniel Peterseim. 2021. Numerical Homogenization Beyond Scale Separation.” Acta Numerica 30 (May): 1–86.
Arora, Sanjeev, Rong Ge, Tengyu Ma, and Ankur Moitra. 2015. Simple, Efficient, and Neural Algorithms for Sparse Coding.” In Proceedings of The 28th Conference on Learning Theory, 40:113–49. Paris, France: PMLR.
Atkinson, Steven, Waad Subber, and Liping Wang. 2019. “Data-Driven Discovery of Free-Form Governing Differential Equations.” In, 7.
Bao, Gang, Xiaojing Ye, Yaohua Zang, and Haomin Zhou. 2020. Numerical Solution of Inverse Problems by Weak Adversarial Networks.” Inverse Problems 36 (11): 115003.
Bao, Tianshu, Shengyu Chen, Taylor T. Johnson, Peyman Givi, Shervin Sammak, and Xiaowei Jia. 2022. Physics Guided Neural Networks for Spatio-Temporal Super-Resolution of Turbulent Flows.” In Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, 118–28. PMLR.
Bar-Sinai, Yohai, Stephan Hoyer, Jason Hickey, and Michael P. Brenner. 2019. Learning Data-Driven Discretizations for Partial Differential Equations.” Proceedings of the National Academy of Sciences 116 (31): 15344–49.
Basir, Shamsulhaq, and Inanc Senocak. n.d. Critical Investigation of Failure Modes in Physics-Informed Neural Networks.” In AIAA SCITECH 2022 Forum. American Institute of Aeronautics and Astronautics.
Beck, Christian, Weinan E, and Arnulf Jentzen. 2019. Machine Learning Approximation Algorithms for High-Dimensional Fully Nonlinear Partial Differential Equations and Second-Order Backward Stochastic Differential Equations.” Journal of Nonlinear Science 29 (4): 1563–1619.
Bezgin, Deniz A., Aaron B. Buhendwa, and Nikolaus A. Adams. 2022. JAX-FLUIDS: A Fully-Differentiable High-Order Computational Fluid Dynamics Solver for Compressible Two-Phase Flows.” arXiv:2203.13760 [Physics], March.
Bhattacharya, Kaushik, Bamdad Hosseini, Nikola B. Kovachki, and Andrew M. Stuart. 2020. Model Reduction and Neural Networks for Parametric PDEs.” arXiv:2005.03180 [Cs, Math, Stat], May.
Blechschmidt, Jan, and Oliver G. Ernst. 2021. Three Ways to Solve Partial Differential Equations with Neural Networks — A Review.” GAMM-Mitteilungen 44 (2): e202100006.
Bottero, Luca, Francesco Calisto, Giovanni Graziano, Valerio Pagliarino, Martina Scauda, Sara Tiengo, and Simone Azeglio. 2020. Physics-Informed Machine Learning Simulator for Wildfire Propagation,” December.
Brandstetter, Johannes, Rianne van den Berg, Max Welling, and Jayesh K. Gupta. 2022. Clifford Neural Layers for PDE Modeling.” In.
Brandstetter, Johannes, Daniel Worrall, and Max Welling. 2022. Message Passing Neural PDE Solvers.” In International Conference on Learning Representations.
Brehmer, Johann, Kyle Cranmer, Siddharth Mishra-Sharma, Felix Kling, and Gilles Louppe. 2019. “Mining Gold: Improving Simulation-Based Inference with Latent Information.” In, 7.
Brenner, M. P., J. D. Eldredge, and J. B. Freund. 2019. Perspective on Machine Learning for Advancing Fluid Mechanics.” Physical Review Fluids 4 (10): 100501.
Brunton, Steven L., and Jose Nathan Kutz. 2019. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge: Cambridge University Press.
Brunton, Steven L., Bernd R. Noack, and Petros Koumoutsakos. 2020. Machine Learning for Fluid Mechanics.” Annual Review of Fluid Mechanics 52 (1): 477–508.
Chu, Mengyu, Nils Thuerey, Hans-Peter Seidel, Christian Theobalt, and Rhaleb Zayer. 2021. Learning Meaningful Controls for Fluids.” ACM Transactions on Graphics 40 (4): 1–13.
Cockayne, Jon, and Andrew B. Duncan. 2020. Probabilistic Gradients for Fast Calibration of Differential Equation Models,” September.
Cranmer, Miles D, Rui Xu, Peter Battaglia, and Shirley Ho. 2019. “Learning Symbolic Physics with Graph Networks.” In Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS), 6.
Dandekar, Raj, Karen Chung, Vaibhav Dixit, Mohamed Tarek, Aslan Garcia-Valadez, Krishna Vishal Vemula, and Chris Rackauckas. 2021. Bayesian Neural Ordinary Differential Equations.” arXiv:2012.07244 [Cs], March.
Daw, Arka, Jie Bu, Sifan Wang, Paris Perdikaris, and Anuj Karpatne. 2022. Rethinking the Importance of Sampling in Physics-Informed Neural Networks.” arXiv.
Di Giovanni, Francesco, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, and Michael M. Bronstein. 2022. Graph Neural Networks as Gradient Flows.” arXiv.
Duffin, Connor, Edward Cripps, Thomas Stemler, and Mark Girolami. 2021. Statistical Finite Elements for Misspecified Models.” Proceedings of the National Academy of Sciences 118 (2).
Duraisamy, Karthik, Gianluca Iaccarino, and Heng Xiao. 2019. Turbulence Modeling in the Age of Data.” Annual Review of Fluid Mechanics 51 (1): 357–77.
E, Weinan. 2021. The Dawning of a New Era in Applied Mathematics.” Notices of the American Mathematical Society 68 (04): 1.
E, Weinan, Jiequn Han, and Arnulf Jentzen. 2017. Deep Learning-Based Numerical Methods for High-Dimensional Parabolic Partial Differential Equations and Backward Stochastic Differential Equations.” Communications in Mathematics and Statistics 5 (4): 349–80.
———. 2020. Algorithms for Solving High Dimensional PDEs: From Nonlinear Monte Carlo to Machine Learning.” arXiv:2008.13333 [Cs, Math], September.
E, Weinan, and Bing Yu. 2018. The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems.” Communications in Mathematics and Statistics 6 (1): 1–12.
Eigel, Martin, Reinhold Schneider, Philipp Trunschke, and Sebastian Wolf. 2019. Variational Monte Carlo — Bridging Concepts of Machine Learning and High Dimensional Partial Differential Equations.” Advances in Computational Mathematics 45 (5-6): 2503–32.
Fan, Yuwei, Cindy Orozco Bohorquez, and Lexing Ying. 2019. BCR-Net: A Neural Network Based on the Nonstandard Wavelet Form.” Journal of Computational Physics 384 (May): 1–15.
Faroughi, Salah A., Nikhil Pawar, Celio Fernandes, Maziar Raissi, Subasish Das, Nima K. Kalantari, and Seyed Kourosh Mahjour. 2023. Physics-Guided, Physics-Informed, and Physics-Encoded Neural Networks in Scientific Computing.” arXiv.
Finzi, Marc, Roberto Bondesan, and Max Welling. 2020. Probabilistic Numeric Convolutional Neural Networks.” arXiv:2010.10876 [Cs], October.
Freeman, C Daniel, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, and Olivier Bachem. 2021. Brax–A Differentiable Physics Engine for Large Scale Rigid Body Simulation.” arXiv Preprint arXiv:2106.13281.
Frerix, Thomas, Dmitrii Kochkov, Jamie A. Smith, Daniel Cremers, Michael P. Brenner, and Stephan Hoyer. 2021. Variational Data Assimilation with a Learned Inverse Observation Operator.” arXiv.
Gan, Chuang, Jeremy Schwartz, Seth Alter, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, et al. 2020. Threedworld: A Platform for Interactive Multi-Modal Physical Simulation.” arXiv Preprint arXiv:2007.04954.
Ghattas, Omar, and Karen Willcox. 2021. Learning Physics-Based Models from Data: Perspectives from Inverse Problems and Model Reduction.” Acta Numerica 30 (May): 445–554.
Girolami, Mark, Eky Febrianto, Ge Yin, and Fehmi Cirak. 2021. The Statistical Finite Element Method (statFEM) for Coherent Synthesis of Observation Data and Model Predictions.” Computer Methods in Applied Mechanics and Engineering 375 (March): 113533.
Goswami, Somdatta, Aniruddha Bora, Yue Yu, and George Em Karniadakis. 2022. Physics-Informed Deep Neural Operator Networks,” July.
Granas, Andrzej, and James Dugundji. 2003. Fixed Point Theory. Springer Monographs in Mathematics. New York, NY: Springer New York.
Grohs, Philipp, and Lukas Herrmann. 2022. Deep Neural Network Approximation for High-Dimensional Elliptic PDEs with Boundary Conditions.” IMA Journal of Numerical Analysis 42 (3): 2055–82.
Guibas, John, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. 2021. Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers,” November.
Gulian, Mamikon, Ari Frankel, and Laura Swiler. 2020. Gaussian Process Regression Constrained by Boundary Value Problems.” arXiv:2012.11857 [Cs, Math, Stat], December.
Guo, Mengwu, and Jan S. Hesthaven. 2019. Data-Driven Reduced Order Modeling for Time-Dependent Problems.” Computer Methods in Applied Mechanics and Engineering 345 (March): 75–99.
Han, Jiequn, Arnulf Jentzen, and Weinan E. 2018. Solving High-Dimensional Partial Differential Equations Using Deep Learning.” Proceedings of the National Academy of Sciences 115 (34): 8505–10.
Hennigh, Oliver, Susheela Narasimhan, Mohammad Amin Nabian, Akshay Subramaniam, Kaustubh Tangsali, Max Rietmann, Jose del Aguila Ferrandis, Wonmin Byeon, Zhiwei Fang, and Sanjay Choudhry. 2020. NVIDIA SimNet™️: An AI-Accelerated Multi-Physics Simulation Framework.” arXiv:2012.07938 [Physics], December.
Hoffimann, Júlio, Maciel Zortea, Breno de Carvalho, and Bianca Zadrozny. 2021. Geostatistical Learning: Challenges and Opportunities.” Frontiers in Applied Mathematics and Statistics 7.
Holl, Philipp, Vladlen Koltun, Kiwon Um, and Nils Thuerey. 2020. Phiflow: A Differentiable PDE Solving Framework for Deep Learning via Physical Simulations.” In NeurIPS Workshop.
Holl, Philipp, Nils Thuerey, and Vladlen Koltun. 2020. Learning to Control PDEs with Differentiable Physics.” In ICLR, 5.
Holzschuh, Benjamin, Simona Vegetti, and Nils Thuerey. 2022. “Score Matching via Differentiable Physics,” 7.
Hu, Yuanming, Tzu-Mao Li, Luke Anderson, Jonathan Ragan-Kelley, and Frédo Durand. 2019. Taichi: A Language for High-Performance Computation on Spatially Sparse Data Structures.” ACM Transactions on Graphics 38 (6): 1–16.
Huang, Zizhou, Teseo Schneider, Minchen Li, Chenfanfu Jiang, Denis Zorin, and Daniele Panozzo. 2021. A Large-Scale Benchmark for the Incompressible Navier-Stokes Equations.” arXiv:2112.05309 [Cs], December.
Innes, Mike, Alan Edelman, Keno Fischer, Chris Rackauckas, Elliot Saba, Viral B. Shah, and Will Tebbutt. 2019. A Differentiable Programming System to Bridge Machine Learning and Scientific Computing.” arXiv.
Jiang, Chiyu Max, Soheil Esmaeilzadeh, Kamyar Azizzadenesheli, Karthik Kashinath, Mustafa Mustafa, Hamdi A. Tchelepi, Philip Marcus, Prabhat, and Anima Anandkumar. 2020. MeshfreeFlowNet: A Physics-Constrained Deep Continuous Space-Time Super-Resolution Framework,” May.
Jin, Hanxun, Enrui Zhang, and Horacio D. Espinosa. 2023. Recent Advances and Applications of Machine Learning in Experimental Solid Mechanics: A Review.” arXiv.
Jo, Hyeontae, Hwijae Son, Hyung Ju Hwang, and Eun Heui Kim. 2020. Deep Neural Network Approach to Forward-Inverse Problems.” Networks & Heterogeneous Media 15 (2): 247.
Kadri, Hachem, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Alain Rakotomamonjy, and Julien Audiffren. 2016. Operator-Valued Kernels for Learning from Functional Response Data.” The Journal of Machine Learning Research 17 (1): 613–66.
Karniadakis, George Em, Ioannis G. Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. 2021. Physics-Informed Machine Learning.” Nature Reviews Physics 3 (6): 422–40.
Kasim, M. F., D. Watson-Parris, L. Deaconu, S. Oliver, P. Hatfield, D. H. Froula, G. Gregori, et al. 2020. Up to Two Billion Times Acceleration of Scientific Simulations with Deep Neural Architecture Search.” arXiv:2001.08055 [Physics, Stat], January.
Kasim, Muhammad, J Topp-Mugglestone, P Hatfield, D H Froula, G Gregori, M Jarvis, E Viezzer, and Sam Vinko. 2019. “A Million Times Speed up in Parameters Retrieval with Deep Learning.” In, 5.
Kharazmi, E., Z. Zhang, and G. E. Karniadakis. 2019. Variational Physics-Informed Neural Networks For Solving Partial Differential Equations.” arXiv:1912.00873 [Physics, Stat], November.
Khodayi-Mehr, Reza, and Michael M. Zavlanos. 2019. VarNet: Variational Neural Networks for the Solution of Partial Differential Equations.” arXiv:1912.07443 [Physics, Stat], December.
Kochkov, Dmitrii, Alvaro Sanchez-Gonzalez, Jamie Smith, Tobias Pfaff, Peter Battaglia, and Michael P Brenner. 2020. “Learning Latent FIeld Dynamics of PDEs.” In Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS), 7.
Kochkov, Dmitrii, Jamie A. Smith, Ayya Alieva, Qing Wang, Michael P. Brenner, and Stephan Hoyer. 2021. Machine Learning–Accelerated Computational Fluid Dynamics.” Proceedings of the National Academy of Sciences 118 (21).
Kononenko, O., and I. Kononenko. 2018. Machine Learning and Finite Element Method for Physical Systems Modeling.” arXiv:1801.07337 [Physics], March.
Kovachki, Nikola, Samuel Lanthaler, and Siddhartha Mishra. 2021. On Universal Approximation and Error Bounds for Fourier Neural Operators.” arXiv:2107.07562 [Cs, Math], July.
Kovachki, Nikola, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. 2021. Neural Operator: Learning Maps Between Function Spaces.” In arXiv:2108.08481 [Cs, Math].
Krämer, Nicholas, Nathanael Bosch, Jonathan Schmidt, and Philipp Hennig. 2021. Probabilistic ODE Solutions in Millions of Dimensions.” arXiv.
Krishnapriyan, Aditi, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. 2021. Characterizing Possible Failure Modes in Physics-Informed Neural Networks.” In Advances in Neural Information Processing Systems, 34:26548–60. Curran Associates, Inc.
Lagaris, I.E., A. Likas, and D.I. Fotiadis. 1998. Artificial Neural Networks for Solving Ordinary and Partial Differential Equations.” IEEE Transactions on Neural Networks 9 (5): 987–1000.
Lei, Huan, Jing Li, Peiyuan Gao, Panos Stinis, and Nathan Baker. 2018. A Data-Driven Framework for Sparsity-Enhanced Surrogates with Arbitrary Mutually Dependent Randomness,” April.
Li, Zongyi, Daniel Zhengyu Huang, Burigede Liu, and Anima Anandkumar. 2022. Fourier Neural Operator with Learned Deformations for PDEs on General Geometries.” arXiv.
Li, Zongyi, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. 2020a. Neural Operator: Graph Kernel Network for Partial Differential Equations.” In. arXiv.
———. 2020b. Fourier Neural Operator for Parametric Partial Differential Equations.” arXiv:2010.08895 [Cs, Math], October.
Li, Zongyi, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Andrew Stuart, Kaushik Bhattacharya, and Anima Anandkumar. 2020. Multipole Graph Neural Operator for Parametric Partial Differential Equations.” In Advances in Neural Information Processing Systems. Vol. 33.
Li, Zongyi, Hongkai Zheng, Nikola Borislavov Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Andrew Stuart, Kamyar Azizzadenesheli, and Anima Anandkumar. 2021. Physics-Informed Neural Operator for Learning Partial Differential Equations,” November.
Lian, Heng. 2007. Nonlinear Functional Models for Functional Responses in Reproducing Kernel Hilbert Spaces.” Canadian Journal of Statistics 35 (4): 597–606.
Liao, Yulei, and Pingbing Ming. 2021. Deep Nitsche Method: Deep Ritz Method with Essential Boundary Conditions.”
Lienen, Marten, and Stephan Günnemann. 2021. Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks.” In International Conference on Learning Representations.
Liu, Xiao, Kyongmin Yeo, and Siyuan Lu. 2020. Statistical Modeling for Spatio-Temporal Data From Stochastic Convection-Diffusion Processes.” Journal of the American Statistical Association 0 (0): 1–18.
Long, Da, Zheng Wang, Aditi Krishnapriyan, Robert Kirby, Shandian Zhe, and Michael Mahoney. 2022. AutoIP: A United Framework to Integrate Physics into Gaussian Processes.” arXiv.
Long, Zichao, Yiping Lu, Xianzhong Ma, and Bin Dong. 2018. PDE-Net: Learning PDEs from Data.” In Proceedings of the 35th International Conference on Machine Learning, 3208–16. PMLR.
Lu, Lu, Pengzhan Jin, and George Em Karniadakis. 2020. DeepONet: Learning Nonlinear Operators for Identifying Differential Equations Based on the Universal Approximation Theorem of Operators.” arXiv:1910.03193 [Cs, Stat], April.
Lu, Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis. 2021. DeepXDE: A Deep Learning Library for Solving Differential Equations.” SIAM Review 63 (1): 208–28.
Ma, Yingbo, Shashi Gowda, Ranjan Anantharaman, Chris Laughman, Viral Shah, and Chris Rackauckas. 2021. ModelingToolkit: A Composable Graph Transformation System For Equation-Based Modeling,” March.
Magnani, Emilia, Nicholas Krämer, Runa Eschenhagen, Lorenzo Rosasco, and Philipp Hennig. 2022. Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs.” arXiv.
Meng, Xuhui, Hessam Babaee, and George Em Karniadakis. 2021. Multi-Fidelity Bayesian Neural Networks: Algorithms and Applications.” Journal of Computational Physics 438 (August): 110361.
Mitusch, Sebastian K., Simon W. Funke, and Jørgen S. Dokken. 2019. Dolfin-Adjoint 2018.1: Automated Adjoints for FEniCS and Firedrake.” Journal of Open Source Software 4 (38): 1292.
Mowlavi, Saviz, and Saleh Nabi. 2021. Optimal Control of PDEs Using Physics-Informed Neural Networks.” arXiv:2111.09880 [Physics], November.
Müller, Johannes, and Marius Zeinhofer. 2020. Deep Ritz Revisited.” arXiv.
Nabian, Mohammad Amin, and Hadi Meidani. 2019. A Deep Learning Solution Approach for High-Dimensional Random Differential Equations.” Probabilistic Engineering Mechanics 57 (July): 14–25.
Naumann, Uwe. 2011. The Art of Differentiating Computer Programs: An Introduction to Algorithmic Differentiation. Society for Industrial and Applied Mathematics.
Négiar, Geoffrey, Michael W. Mahoney, and Aditi S. Krishnapriyan. 2022. Learning Differentiable Solvers for Systems with Hard Constraints.” arXiv.
O’Hagan, Anthony. 2013. “Polynomial Chaos: A Tutorial and Critique from a Statistician’s Perspective,” 20.
Oladyshkin, S., and W. Nowak. 2012. Data-Driven Uncertainty Quantification Using the Arbitrary Polynomial Chaos Expansion.” Reliability Engineering & System Safety 106 (October): 179–90.
Opschoor, Joost A. A., Philipp C. Petersen, and Christoph Schwab. 2020. Deep ReLU Networks and High-Order Finite Element Methods.” Analysis and Applications 18 (05): 715–70.
Otness, Karl, Arvi Gjoka, Joan Bruna, Daniele Panozzo, Benjamin Peherstorfer, Teseo Schneider, and Denis Zorin. 2021. An Extensible Benchmark Suite for Learning to Simulate Physical Systems.” In.
Pathak, Jaideep, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, et al. 2022. Fourcastnet: A Global Data-Driven High-Resolution Weather Model Using Adaptive Fourier Neural Operators,” February, 28.
Perdikaris, Paris, Daniele Venturi, and George Em Karniadakis. 2016. Multifidelity Information Fusion Algorithms for High-Dimensional Systems and Massive Data Sets.” SIAM Journal on Scientific Computing 38 (4): B521–38.
Perdikaris, P., D. Venturi, J. O. Royset, and G. E. Karniadakis. 2015. Multi-Fidelity Modelling via Recursive Co-Kriging and Gaussian–Markov Random Fields.” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 471 (2179): 20150018.
Pestourie, Raphaël, Youssef Mroueh, Chris Rackauckas, Payel Das, and Steven G. Johnson. 2022. Physics-Enhanced Deep Surrogates for PDEs.” arXiv.
Pestourie, Raphaël, Youssef Mroueh, Christopher Vincent Rackauckas, Payel Das, and Steven Glenn Johnson. 2021. Data-Efficient Training with Physics-Enhanced Deep Surrogates.” In.
Poli, Michael, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, and Stefano Ermon. 2022. Transform Once: Efficient Operator Learning in Frequency Domain.” In Advances in Neural Information Processing Systems, 35:7947–59.
Qian, Elizabeth, Boris Kramer, Benjamin Peherstorfer, and Karen Willcox. 2020. Lift & Learn: Physics-Informed Machine Learning for Large-Scale Nonlinear Dynamical Systems.” Physica D: Nonlinear Phenomena 406 (May): 132401.
Rackauckas, Chris, Alan Edelman, Keno Fischer, Mike Innes, Elliot Saba, Viral B Shah, and Will Tebbutt. 2020. Generalized Physics-Informed Learning Through Language-Wide Differentiable Programming.” MIT Web Domain, 6.
Rackauckas, Christopher. 2019. The Essential Tools of Scientific Machine Learning (Scientific ML).”
Raissi, Maziar, Paris Perdikaris, and George Em Karniadakis. 2017a. Physics Informed Deep Learning (Part I): Data-Driven Solutions of Nonlinear Partial Differential Equations,” November.
———. 2017b. Physics Informed Deep Learning (Part II): Data-Driven Discovery of Nonlinear Partial Differential Equations,” November.
Raissi, Maziar, P. Perdikaris, and George Em Karniadakis. 2019. Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations.” Journal of Computational Physics 378 (February): 686–707.
Ramsundar, Bharath, Dilip Krishnamurthy, and Venkatasubramanian Viswanathan. 2021. Differentiable Physics: A Position Piece.” arXiv:2109.07573 [Physics], September.
Ray, Deep, Orazio Pinti, and Assad A. Oberai. 2023. Deep Learning and Computational Physics (Lecture Notes).”
Razavi, Saman. 2021. Deep Learning, Explained: Fundamentals, Explainability, and Bridgeability to Process-Based Modelling.” Environmental Modelling & Software 144 (October): 105159.
Rezende, Danilo J, Sébastien Racanière, Irina Higgins, and Peter Toth. 2019. “Equivariant Hamiltonian Flows.” In Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS), 6.
Rodriguez-Torrado, Ruben, Pablo Ruiz, Luis Cueto-Felgueroso, Michael Cerny Green, Tyler Friesen, Sebastien Matringe, and Julian Togelius. 2022. Physics-Informed Attention-Based Neural Network for Hyperbolic Partial Differential Equations: Application to the Buckley–Leverett Problem.” Scientific Reports 12 (1): 7557.
Ruhe, David, Jayesh K Gupta, Steven de Keninck, Max Welling, and Johannes Brandstetter. 2023. Geometric Clifford Algebra Networks.” In arXiv Preprint arXiv:2302.06594.
Saha, Akash, and Palaniappan Balamurugan. 2020. Learning with Operator-Valued Kernels in Reproducing Kernel Krein Spaces.” In Advances in Neural Information Processing Systems. Vol. 33.
Sarkar, Soumalya, and Michael Joly. 2019. Multi-FIdelity Learning with Heterogeneous Domains.” In NeurIPS, 5.
Schnell, Patrick, Philipp Holl, and Nils Thuerey. 2022. Half-Inverse Gradients for Physical Deep Learning.” arXiv:2203.10131 [Physics], March.
Shankar, Varun, Gavin D Portwood, Arvind T Mohan, Peetak P Mitra, Christopher Rackauckas, Lucas A Wilson, David P Schmidt, and Venkatasubramanian Viswanathan. 2020. “Learning Non-Linear Spatio-Temporal Dynamics with Convolutional Neural ODEs.” In Third Workshop on Machine Learning and the Physical Sciences (NeurIPS 2020).
Shi, Zheng, Nur Sila Gulgec, Albert S. Berahas, Shamim N. Pakzad, and Martin Takáč. 2020. Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations.” arXiv.
Sigrist, Fabio Roman Albert. 2013. Physics Based Dynamic Modeling of Space-Time Data.” Application/pdf. ETH Zurich.
Sigrist, Fabio, Hans R. Künsch, and Werner A. Stahel. 2015a. Spate : An R Package for Spatio-Temporal Modeling with a Stochastic Advection-Diffusion Process.” Application/pdf. Journal of Statistical Software 63 (14).
———. 2015b. Stochastic Partial Differential Equation Based Modelling of Large Space-Time Data Sets.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 77 (1): 3–33.
Silvester, Steven, Anthony Tanbakuchi, Paul Müller, Juan Nunez-Iglesias, Mark Harfouche, Almar Klein, Matt McCormick, et al. 2020. Imageio/Imageio V0.9.0.” Zenodo.
Sirignano, Justin, and Konstantinos Spiliopoulos. 2018. DGM: A Deep Learning Algorithm for Solving Partial Differential Equations.” Journal of Computational Physics 375 (December): 1339–64.
Solin, Arno, and Simo Särkkä. 2020. Hilbert Space Methods for Reduced-Rank Gaussian Process Regression.” Statistics and Computing 30 (2): 419–46.
Stachenfeld, Kimberly, Drummond B. Fielding, Dmitrii Kochkov, Miles Cranmer, Tobias Pfaff, Jonathan Godwin, Can Cui, Shirley Ho, Peter Battaglia, and Alvaro Sanchez-Gonzalez. 2022. Learned Coarse Models for Efficient Turbulence Simulation.” arXiv.
Sulam, Jeremias, Aviad Aberdam, Amir Beck, and Michael Elad. 2020. On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks.” IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (8): 1968–80.
Tait, Daniel J., and Theodoros Damoulas. 2020. Variational Autoencoding of PDE Inverse Problems.” arXiv:2006.15641 [Cs, Stat], June.
Takamoto, Makoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, and Mathias Niepert. 2022. PDEBench: An Extensive Benchmark for Scientific Machine Learning.” In.
Tartakovsky, Alexandre M., Carlos Ortiz Marrero, Paris Perdikaris, Guzel D. Tartakovsky, and David Barajas-Solano. 2018. Learning Parameters and Constitutive Relationships with Physics Informed Deep Neural Networks,” August.
Thuerey, Nils, Philipp Holl, Maximilian Mueller, Patrick Schnell, Felix Trost, and Kiwon Um. 2021. Physics-Based Deep Learning. WWW.
Torrado, Ruben Rodriguez, Pablo Ruiz, Luis Cueto-Felgueroso, Michael Cerny Green, Tyler Friesen, Sebastien Matringe, and Julian Togelius. 2021. Physics-Informed Attention-Based Neural Network for Solving Non-Linear Partial Differential Equations,” December.
Um, Kiwon, Robert Brand, Yun Fei, Philipp Holl, and Nils Thuerey. 2021. Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers.” arXiv:2007.00016 [Physics], January.
Um, Kiwon, and Philipp Holl. 2021. “Differentiable Physics for Improving the Accuracy of Iterative PDE-Solvers with Neural Networks.” In, 5.
Vadyala, Shashank Reddy, Sai Nethra Betgeri, and Naga Parameshwari Betgeri. 2022. Physics-Informed Neural Network Method for Solving One-Dimensional Advection Equation Using PyTorch.” Array 13 (March): 100110.
Wacker, Philipp. 2017. Laplace’s Method in Bayesian Inverse Problems.” arXiv:1701.07989 [Math], April.
Wang, Chulin, Eloisa Bentivegna, Wang Zhou, Levente J Klein, and Bruce Elmegreen. 2020. “Physics-Informed Neural Network Super Resolution for Advection-Diffusion Models.” In, 9.
Wang, Rui, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. 2020. Towards Physics-Informed Deep Learning for Turbulent Flow Prediction.” In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1457–66. KDD ’20. New York, NY, USA: Association for Computing Machinery.
Wang, Sifan, Xinling Yu, and Paris Perdikaris. 2020. When and Why PINNs Fail to Train: A Neural Tangent Kernel Perspective,” July.
Wen, Gege, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M. Benson. 2022. U-FNO—An Enhanced Fourier Neural Operator-Based Deep-Learning Model for Multiphase Flow.” Advances in Water Resources 163 (May): 104180.
Xu, Kailai, and Eric Darve. 2019. Adversarial Numerical Analysis for Inverse Problems.” arXiv.
———. 2020. ADCME: Learning Spatially-Varying Physical Fields Using Deep Neural Networks.” In arXiv:2011.11955 [Cs, Math].
Yang, Liu, Xuhui Meng, and George Em Karniadakis. 2021. B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data.” Journal of Computational Physics 425 (January): 109913.
Yang, Liu, Dongkun Zhang, and George Em Karniadakis. 2020. Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations.” SIAM Journal on Scientific Computing 42 (1): A292–317.
Yin, Yuan, Matthieu Kirchmeyer, Jean-Yves Franceschi, Alain Rakotomamonjy, and Patrick Gallinari. 2023. Continuous PDE Dynamics Forecasting with Implicit Neural Representations.” arXiv.
Zammit-Mangion, Andrew, Michael Bertolacci, Jenny Fisher, Ann Stavert, Matthew L. Rigby, Yi Cao, and Noel Cressie. 2021. WOMBAT v1.0: A fully Bayesian global flux-inversion framework.” Geoscientific Model Development Discussions, July, 1–51.
Zang, Yaohua, Gang Bao, Xiaojing Ye, and Haomin Zhou. 2020. Weak Adversarial Networks for High-Dimensional Partial Differential Equations.” Journal of Computational Physics 411 (June): 109409.
Zeng, Qi, Spencer H. Bryngelson, and Florian Schäfer. 2022. Competitive Physics Informed Networks.” arXiv.
Zhang, Dongkun, Ling Guo, and George Em Karniadakis. 2020. Learning in Modal Space: Solving Time-Dependent Stochastic PDEs Using Physics-Informed Neural Networks.” SIAM Journal on Scientific Computing 42 (2): A639–65.
Zhang, Dongkun, Lu Lu, Ling Guo, and George Em Karniadakis. 2019. Quantifying Total Uncertainty in Physics-Informed Neural Networks for Solving Forward and Inverse Stochastic Problems.” Journal of Computational Physics 397 (November): 108850.
Zhi, Weiming, Tin Lai, Lionel Ott, Edwin V. Bonilla, and Fabio Ramos. 2022. Learning Efficient and Robust Ordinary Differential Equations via Invertible Neural Networks.” In International Conference on Machine Learning, 27060–74. PMLR.
Zubov, Kirill, Zoe McCarthy, Yingbo Ma, Francesco Calisto, Valerio Pagliarino, Simone Azeglio, Luca Bottero, et al. 2021. NeuralPDE: Automating Physics-Informed Neural Networks (PINNs) with Error Approximations.” arXiv.

1 comment

As a geophysicist working with 3D and lots of data with coupled PDEs, a fast solver is nice, but often intractably slow. Even with modern solvers. Even with GPU. Replacing the solver with a NN approximant is potentially much faster, even if the speed is merely amortized. That has so many benefits for real-world modeling work.