Differentiable PDE solvers

Method of adjoints etc



Suppose we are keen to devise yet another method that does clever things to augment PDE solvers with ML somehow. To that end it would be nice to have a PDE solver that was not a complete black box, but which we could interrogate for useful gradients. Obviously all PDE solvers use gradient information internally, but only some of them expose it to us as users; e.g. MODFLOW will give me a solution field but not the gradients that were used to calculate that field. It will certainly not give me the adjoints, so that I could calculate the gradient of an objective function of that field with respect to input parameters. In ML toolkits accessing this kind of information is easy.
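To make that concrete, here is a toy sketch, in JAX rather than any of the libraries below, of what "easy" looks like: an explicit finite-difference heat-equation stepper with periodic boundaries, through which `jax.grad` returns the gradient of a misfit objective with respect to the diffusivity. The scheme, the names and the objective are all made up for illustration.

```python
import jax
import jax.numpy as jnp

def solve_heat(kappa, u0, dt=1e-3, dx=0.1, steps=200):
    """Step u_t = kappa * u_xx forward with an explicit scheme and periodic boundaries."""
    def step(u, _):
        u_xx = (jnp.roll(u, -1) - 2 * u + jnp.roll(u, 1)) / dx**2
        return u + dt * kappa * u_xx, None
    u, _ = jax.lax.scan(step, u0, None, length=steps)
    return u

def objective(kappa, u0, target):
    """Misfit between the final temperature field and some target field."""
    return jnp.mean((solve_heat(kappa, u0) - target) ** 2)

u0 = jnp.sin(jnp.linspace(0, 2 * jnp.pi, 64, endpoint=False))
target = 0.5 * u0
# Differentiating through the whole solve is one line; no hand-derived adjoint needed.
dJ_dkappa = jax.grad(objective)(0.3, u0, target)
```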

TODO: define adjoint method etc.
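In the meantime, the short version, in standard notation (not specific to any of the solvers below): suppose the solver enforces a discretised PDE constraint $F(u, \theta) = 0$, which implicitly defines the solution field $u(\theta)$, and we care about a scalar objective $J(u, \theta)$. Then

$$
\frac{\mathrm{d} J}{\mathrm{d} \theta}
= \frac{\partial J}{\partial \theta} - \lambda^{\top} \frac{\partial F}{\partial \theta},
\qquad \text{where } \lambda \text{ solves the adjoint equation }
\left(\frac{\partial F}{\partial u}\right)^{\!\top} \lambda = \left(\frac{\partial J}{\partial u}\right)^{\!\top}.
$$

The payoff is that a single extra linear solve (the adjoint solve) yields the gradient of $J$ with respect to all the parameters $\theta$ at once; reverse-mode automatic differentiation computes exactly this quantity, which is why ML frameworks and adjoint solvers are natural bedfellows.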

OTOH, there is a lot of sophisticated work done by PDE solvers that is hard for ML toolkits to recreate. That is why PDE solvers are a thing.

Tools which combine both worlds, PDE solutions and ML optimisations, do exist; there are adjoint method systems for mainstream PDE solvers just as there are PDE solvers for ML frameworks. Let us list some of the options here:

Mantaflow/Phiflow

mantaflow - an extensible framework for fluid simulation:

mantaflow is an open-source framework targeted at fluid simulation research in Computer Graphics and Machine Learning. Its parallelized C++ solver core, python scene definition interface and plugin system allow for quickly prototyping and testing new algorithms. A wide range of Navier-Stokes solver variants are included. It’s very versatile, and allows coupling and import/export with deep learning frameworks (e.g., tensorflow via numpy) or standalone compilation as matlab plugin. Mantaflow also serves as the simulation engine in Blender.

Feature list:

The framework can be used with or without GUI on Linux, MacOS and Windows. Here is an incomplete list of features implemented so far:

  • Eulerian simulation using MAC Grids, PCG pressure solver and MacCormack advection
  • Flexible particle systems
  • FLIP simulations for liquids
  • Surface mesh tracking
  • Free surface simulations with levelsets, fast marching
  • Wavelet and surface turbulence
  • K-epsilon turbulence modeling and synthesis
  • Maya and Blender export for rendering

Mantaflow’s particular selling point is producing stunning 3D animations as output.

Mantaflow pairs well with PhiFlow, a differentiable PDE solving framework for machine learning (Holl and Koltun 2020):

  • Variety of built-in PDE operations with focus on fluid phenomena, allowing for concise formulation of simulations.
  • Tight integration with PyTorch, Jax and TensorFlow for straightforward neural network training with fully differentiable simulations that can run on the GPU.
  • Flexible, easy-to-use web interface featuring live visualizations and interactive controls that can affect simulations or network training on the fly.
  • Object-oriented, vectorized design for expressive code, ease of use, flexibility and extensibility.
  • Reusable simulation code, independent of backend and dimensionality, i.e. the exact same code can run a 2D fluid sim using NumPy and a 3D fluid sim on the GPU using TensorFlow or PyTorch.
  • High-level linear equation solver with automated sparse matrix generation.

PhiFlow seems to have less elaborate PDEs built in than Mantaflow, but has deeper (?) / more flexible (?) ML integration and more active development (?), as seen in various papers from this group (Holl, Thuerey, and Koltun 2020; Um and Holl, n.d.; Um et al. 2021).
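For flavour, here is roughly what a PhiFlow simulation loop looks like, adapted from memory of the PhiFlow 2 smoke demo; the exact class and function names drift between versions, so treat the details as indicative rather than authoritative:

```python
from phi.flow import *  # swap for phi.torch.flow / phi.jax.flow to get a differentiable backend

# Buoyant smoke plume on a 64x64 grid, after the PhiFlow demo.
velocity = StaggeredGrid(0, extrapolation.ZERO, x=64, y=64, bounds=Box(x=100, y=100))
smoke = CenteredGrid(0, extrapolation.BOUNDARY, x=64, y=64, bounds=Box(x=100, y=100))
inflow = CenteredGrid(Sphere(x=50, y=10, radius=5), extrapolation.BOUNDARY, x=64, y=64, bounds=Box(x=100, y=100))

for _ in range(100):
    smoke = advect.mac_cormack(smoke, velocity, dt=1.0) + inflow
    buoyancy = (smoke * (0.0, 0.1)) @ velocity        # resample the buoyancy force onto the staggered grid
    velocity = advect.semi_lagrangian(velocity, velocity, dt=1.0) + buoyancy
    velocity, pressure = fluid.make_incompressible(velocity)
```

Because the same code runs on the NumPy, PyTorch, Jax or TensorFlow backends, the idea is that gradients can then be taken through entire rollouts like this one.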

jax-cfd

Unremarkable name, but it looks very handy (Kochkov et al. 2021).

DeepXDE

DeepXDE is the reference solver implementation for PINNs and DeepONet (Lu, Mao, and Meng 2019).

Use DeepXDE if you need a deep learning library that

  • solves forward and inverse partial differential equations (PDEs) via physics-informed neural network (PINN),
  • solves forward and inverse integro-differential equations (IDEs) via PINN,
  • solves forward and inverse fractional partial differential equations (fPDEs) via fractional PINN (fPINN),
  • approximates functions from multi-fidelity data via multi-fidelity NN (MFNN),
  • approximates nonlinear operators via deep operator network (DeepONet),
  • approximates functions from a dataset with/without constraints.

You might need to moderate your expectations a little. I did, after that bold description. This is an impressive library, but as covered above, the range of problems it can solve in practice is narrower than the description suggests. Think of it as a neural network library that handles certain PDE calculations and you will not go too far astray.
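For what it is worth, the forward-PINN workflow is pleasantly compact. Here is a minimal sketch against the DeepXDE API as I understand it (module layout and argument names have moved around between versions): solve $u'' = 2$ on $[0, 1]$ with $u(0) = u(1) = 0$, whose exact solution is $u(x) = x^2 - x$.

```python
import deepxde as dde

def pde(x, u):
    # Residual of u'' = 2, built from automatic derivatives of the network output.
    du_xx = dde.grad.hessian(u, x)
    return du_xx - 2

geom = dde.geometry.Interval(0, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=64, num_boundary=2)

net = dde.nn.FNN([1] + [32] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=5000)
```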

JuliaFEM

JuliaFEM is an umbrella organisation supporting Julia-backed FEM solvers. The documentation is tricky, but check out the examples and the list of supported solvers. I assume these are all differentiable, since that is a selling point of the SciML.jl ecosystem they spring from.

ADCME

ADCME is suitable for conducting inverse modeling in scientific computing; specifically, ADCME targets physics informed machine learning, which leverages machine learning techniques to solve challenging scientific computing problems. The purpose of the package is to:

  1. provide differentiable programming framework for scientific computing based on TensorFlow automatic differentiation (AD) backend;
  2. adapt syntax to facilitate implementing scientific computing, particularly for numerical PDE discretization schemes;
  3. supply missing functionalities in the backend (TensorFlow) that are important for engineering, such as sparse linear algebra, constrained optimization, etc.

Applications include

  • physics informed machine learning (a.k.a., scientific machine learning, physics informed learning, etc.)
  • coupled hydrological and full waveform inversion
  • constitutive modeling in solid mechanics
  • learning hidden geophysical dynamics
  • parameter estimation in stochastic processes

The package inherits the scalability and efficiency from the well-optimized backend TensorFlow. Meanwhile, it provides access to incorporate existing C/C++ codes via the custom operators. For example, some functionalities for sparse matrices are implemented in this way and serve as extendable “plugins” for ADCME.

FEniCS

FEniCS also seems to be a friendly PDE solver, albeit lacking GPU support. It does, however, have an interface to PyTorch, barkm/torch-fenics, which provides differentiability with respect to parameters on the CPU.

dolfin-adjoint

dolfin-adjoint (Mitusch, Funke, and Dokken 2019):

The dolfin-adjoint project automatically derives the discrete adjoint and tangent linear models from a forward model written in the Python interface to FEniCS and Firedrake.

These adjoint and tangent linear models are key ingredients in many important algorithms, such as data assimilation, optimal control, sensitivity analysis, design optimisation, and error estimation. Such models have made an enormous impact in fields such as meteorology and oceanography, but their use in other scientific fields has been hampered by the great practical difficulty of their derivation and implementation. In his recent book Naumann (2011) states that

[T]he automatic generation of optimal (in terms of robustness and efficiency) adjoint versions of large-scale simulation code is one of the great open challenges in the field of High-Performance Scientific Computing.

The dolfin-adjoint project aims to solve this problem for the case where the model is implemented in the Python interface to FEniCS/Firedrake.

This provides the AD backend to torch-fenics.
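In practice this looks something like the following sketch, in the spirit of the dolfin-adjoint tutorial (the Poisson problem, the scalar control and the functional are invented for illustration): solve a forward problem with FEniCS, then ask for the gradient of a functional of the solution with respect to a parameter, which dolfin-adjoint obtains via the adjoint PDE.

```python
from fenics import *
from fenics_adjoint import *  # overloads FEniCS so that the solve below is recorded on a tape

# Poisson problem -Δu = m on the unit square, with the scalar source strength m as the control.
mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)

m = Constant(1.0)                       # the parameter we want gradients with respect to
u = Function(V)
v = TestFunction(V)
bc = DirichletBC(V, 0.0, "on_boundary")

F = inner(grad(u), grad(v)) * dx - m * v * dx
solve(F == 0, u, bc)

J = assemble(u * u * dx)                # scalar functional of the solution field
dJdm = compute_gradient(J, Control(m))  # one adjoint solve gives dJ/dm
```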

TenFEM

TenFEM offers a small selection of differentiable FEM solvers for TensorFlow.

Trixi

Trixi.jl

Trixi.jl is a numerical simulation framework for hyperbolic conservation laws written in Julia. A key objective for the framework is to be useful to both scientists and students. Therefore, next to having an extensible design with a fast implementation, Trixi is focused on being easy to use for new or inexperienced users, including the installation and postprocessing procedures.

taichi

“Sparse simulator” Taichi (Hu et al. 2019) is presumably also able to solve PDEs? 🤷🏼‍♂️ If so, that would be nifty, because it is also differentiable. I suspect it is more of a graph-network-style approach, though.
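It is certainly differentiable in the autodiff sense: kernels that touch fields declared with `needs_grad` can be wrapped in a tape and gradients read back off the fields. A toy sketch from memory of the Taichi docs (newer releases spell the tape `ti.ad.Tape`), with a stand-in loss rather than a real PDE residual:

```python
import taichi as ti

ti.init(arch=ti.cpu)

n = 16
x = ti.field(dtype=ti.f32, shape=n, needs_grad=True)
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def compute_loss():
    for i in x:
        loss[None] += (x[i] - 1.0) ** 2   # stand-in for, say, a discretised PDE residual

x.fill(0.5)
with ti.Tape(loss=loss):   # records the kernel launch and runs the backward pass on exit
    compute_loss()
print(x.grad.to_numpy())   # dloss/dx for every entry of the field
```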

References

Holl, Philipp, and Vladlen Koltun. 2020. “Phiflow: A Differentiable PDE Solving Framework for Deep Learning via Physical Simulations.” http://montrealrobotics.ca/diffcvgp/assets/papers/3.pdf.
Holl, Philipp, Nils Thuerey, and Vladlen Koltun. 2020. “Learning to Control PDEs with Differentiable Physics.” http://arxiv.org/abs/2001.07457.
Hu, Yuanming, Tzu-Mao Li, Luke Anderson, Jonathan Ragan-Kelley, and Frédo Durand. 2019. “Taichi: A Language for High-Performance Computation on Spatially Sparse Data Structures.” ACM Transactions on Graphics 38 (6): 1–16. https://doi.org/10.1145/3355089.3356506.
Kochkov, Dmitrii, Jamie A. Smith, Ayya Alieva, Qing Wang, Michael P. Brenner, and Stephan Hoyer. 2021. “Machine Learning–Accelerated Computational Fluid Dynamics.” Proceedings of the National Academy of Sciences 118 (21). https://doi.org/10.1073/pnas.2101784118.
Lu, Lu, Zhiping Mao, and Xuhui Meng. 2019. “DeepXDE: A Deep Learning Library for Solving Differential Equations.” http://arxiv.org/abs/1907.04502.
Mitusch, Sebastian K., Simon W. Funke, and Jørgen S. Dokken. 2019. “Dolfin-Adjoint 2018.1: Automated Adjoints for FEniCS and Firedrake.” Journal of Open Source Software 4 (38): 1292. https://doi.org/10.21105/joss.01292.
Naumann, Uwe. 2011. The Art of Differentiating Computer Programs: An Introduction to Algorithmic Differentiation. Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9781611972078.
Um, Kiwon, Robert Brand, Yun Fei, Philipp Holl, and Nils Thuerey. 2021. “Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers.” arXiv:2007.00016 [Physics], January. http://arxiv.org/abs/2007.00016.
Um, Kiwon, and Philipp Holl. n.d. “Differentiable Physics for Improving the Accuracy of Iterative PDE-Solvers with Neural Networks.”
