ΦFlow

A modern python computational fluid dynamics library for ML research

June 24, 2022 — April 17, 2023


A useful differentiable PDE solver.

ΦFlow: A differentiable PDE solving framework for machine learning (Holl et al. 2020):

  • Variety of built-in PDE operations with focus on fluid phenomena, allowing for concise formulation of simulations.
  • Tight integration with PyTorch, Jax and TensorFlow for straightforward neural network training with fully differentiable simulations that can run on the GPU.
  • Flexible, easy-to-use web interface featuring live visualizations and interactive controls that can affect simulations or network training on the fly.
  • Object-oriented, vectorized design for expressive code, ease of use, flexibility and extensibility.
  • Reusable simulation code, independent of backend and dimensionality, i.e. the exact same code can run a 2D fluid sim using NumPy and a 3D fluid sim on the GPU using TensorFlow or PyTorch (see the sketch after this list).
  • High-level linear equation solver with automated sparse matrix generation.
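
A minimal sketch of my own illustrating the backend-independence point; the simulation lines are identical regardless of which import you choose:

# Pick a backend by choosing the import; the simulation code below is unchanged either way.
# from phi.flow import *        # NumPy backend
# from phi.jax.flow import *    # Jax backend
# from phi.tf.flow import *     # TensorFlow backend
from phi.torch.flow import *    # PyTorch backend

velocity = StaggeredGrid(0, extrapolation.ZERO, x=32, y=32, bounds=Box(x=1, y=1))
velocity = advect.semi_lagrangian(velocity, velocity, dt=1.0)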

Phiflow seems to have fewer elaborate PDE operations built in than Mantaflow, but has tighter and thus more flexible (?) ML integration and more active development. As featured in various papers from the TUM group (Holl, Thuerey, and Koltun 2020; Um and Holl 2021; Um et al. 2021).

It is a lovely package in its way; that way is quirky, hipster and artisanal. The documentation is dispersed and confusing: crucial facts are scattered across video tutorials, idiosyncratic and rapidly outdated API docs, tutorials, demos and manuals. This is not to disrespect Philipp’s amazing work on this package; the fact that there is documentation at all places it well ahead of many academic research competitors. I’m merely setting expectations.

It reinvents a few wheels while trying to be helpful, there are occasional impedance mismatches between this PDE-first framework and the needs of ML, and there are a lot of opinionated design choices. Pet peeve: providing a unified API over various toolkits, which makes 80% of PDE tasks easy and the remaining 20% utterly baffling. I’m currently trying to discover how easy it is to stitch together PDEs and NNs manually and propagate gradients between them; in this setting a lot of time is spent working around the convenient generic wrappers to get back to the stuff I actually need.

Most of the documentation on this page is about the ΦFlow v2 API, and was tested on ΦFlow 2.1.4, 2.2.7 and 2.5.1.

1 Documentation central

The YouTube tutorials are useful. The “main” docs site is guide-like. API reference docs are stored separately: phi API documentation.

NB: Now (v2.5) a substantial part of the functionality has been broken out into a generic numerical library, PhiML, and so part of the documentation is filed there.

2 Arrays in Phiflow

A.k.a. Tensors, which are wrappers around the tensor objects in whatever math backend phiflow is using.

Arrays have mandatory dimension names, which are used in the broadcasting rules. This is best explained in the video tutorial, Working with Tensors. Broadcasting between objects with “spatial”, “batch”, “instance” and “channel” dimensions is largely automatic. I guess “temporal” is implied somehow? Maybe via Scene objects? I do not use those.
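
A small sketch of my own of what name-based broadcasting looks like in practice (dimension names and sizes here are arbitrary):

from phi.flow import *

grid_values = math.random_normal(spatial(x=16, y=16))  # a spatial field of values
weights = math.random_uniform(batch(examples=4))        # a batch of scalars
scaled = grid_values * weights  # broadcasts by name: shape (examples=4, x=16, y=16)
print(scaled.shape)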

This system is neat and flexible, but I spend a lot of time working around it, because I eventually need to drop into the specialised mathematical tools of specific NN toolkits to get things done. In the end I still have to choose an interpretation for my tensors, so PhiFlow’s attempt is valiant but ends up burdensome.

Documentation was previously at phi.math API documentation. That mathematical functionality has since been broken out into PhiML, so you should now refer to the phiml.math API documentation.

3 Fields in Phiflow

A sampling grid plus some sample data at grid locations gives us a SampledField. We could imagine fields that are defined in terms of some general mathematical function; indeed the documentation references an AnalyticField, but this appears not to be implemented, so we can consider everything to be a SampledField for now. CenteredGrid objects are reasonably obvious.

Slightly fancy feature: grids do not necessarily summarise the values of a cell by its centre, but may be offset to convey values at the cell boundaries. This is called a StaggeredGrid and is useful for velocity fields in particular.
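
For concreteness, a sketch of constructing both flavours (resolution and bounds here are arbitrary choices of mine):

from phi.flow import *

# scalar field with values at cell centres
pressure = CenteredGrid(0, extrapolation.ZERO, x=64, y=64, bounds=Box(x=1, y=1))
# vector field with components stored on cell faces, the usual choice for velocities
velocity = StaggeredGrid(0, extrapolation.ZERO, x=64, y=64, bounds=Box(x=1, y=1))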

3.1 Sampling

Fields can be sampled in a few different ways. I am trying to learn which ones are differentiable. Field.at is probably usually what I want?

# Resample `field` onto a coarser 16x16 grid spanning the same bounds
inf_field = field.at(
    CenteredGrid(0, x=16, y=16, bounds=field.bounds),
    keep_extrapolation=True)

But there are options. This also works:

# Equivalent: construct a new CenteredGrid directly from the existing field
inf_field = CenteredGrid(
    field, x=16, y=16, bounds=field.bounds,
    extrapolation=field.extrapolation)

Sampling overview explains some more.

Various sample methods take us from a Field to a Tensor. math.downsample2x and math.sample_subgrid address special grid relations. field.sample and field.reduce_sample seem to accept arbitrary geometries (?), which is useful, although I am still confused about how to specify useful Geometries.
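
As far as I understand it, sampling at an arbitrary Geometry looks something like this (the Sphere and resolution are arbitrary choices of mine):

from phi.flow import *

grid = CenteredGrid(Noise(), extrapolation.ZERO, x=32, y=32, bounds=Box(x=1, y=1))
probe = Sphere(x=0.5, y=0.5, radius=0.1)  # an arbitrary Geometry
value = field.sample(grid, probe)         # Field -> phiflow Tensor sampled at the sphere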

4 Actual physics

The actual physics part is what phiflow makes easy.
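
For example, a buoyant smoke step takes only a handful of lines. This is a condensed sketch along the lines of the built-in smoke plume demo (resolution, inflow and constants are arbitrary):

from phi.flow import *

domain = dict(x=64, y=64, bounds=Box(x=100, y=100))
velocity = StaggeredGrid(0, extrapolation.ZERO, **domain)
smoke = CenteredGrid(0, extrapolation.BOUNDARY, **domain)
inflow = 0.2 * CenteredGrid(Sphere(x=50, y=10, radius=5), extrapolation.BOUNDARY, **domain)

for _ in range(10):
    smoke = advect.mac_cormack(smoke, velocity, dt=1.0) + inflow
    buoyancy = (smoke * (0, 0.1)).at(velocity)  # resample scalar forcing onto the staggered grid
    velocity = advect.semi_lagrangian(velocity, velocity, dt=1.0) + buoyancy
    velocity, pressure = fluid.make_incompressible(velocity)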

5 Optimisation

There are two types of optimisation supported in ΦFlow, with two different APIs. One is the ΦFlow native optimisation, which optimises Fields and ΦFlow Tensor objects. This copies the scipy.minimize interface.
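
A sketch of that native route, using math.minimize with a Solve specification (the toy loss and tolerances are mine):

from phi.flow import *

def loss(x):
    return math.l2_loss(math.sin(x) - 0.5)

x0 = math.zeros(spatial(x=8))  # initial guess
# Solve takes the method, relative tolerance, absolute tolerance, and the initial guess
solution = math.minimize(loss, Solve('L-BFGS-B', 0, 1e-6, x0=x0))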

Another is the NN-style SGD training, which optimises NN parameters. This looks like a normal SGD training loop, as per <insert favourite NN framework here>.

As usual with the scipy.minimize style system, there is not much scope to see what is happening during the optimisation. There is an example showing how to do that better in the Physics-based Deep Learning textbook Burgers Optimization with a Differentiable Physics Gradient, although it uses an outdated record_gradients API. A shorter but more modern example is in the cookbook.

6 ML

The API is idiosyncratic, and, I think, rapidly evolving. Best explained through examples.
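
As a starting point, here is a hedged sketch of the NN-style loop using the bundled network helpers (the data-loading function is a placeholder of mine, and the exact helper names have shifted between versions):

from phi.torch.flow import *

net = u_net(in_channels=1, out_channels=1)  # bundled U-Net constructor
optimizer = adam(net, learning_rate=1e-3)

def loss_function(noisy, clean):
    prediction = field.native_call(net, noisy)  # run the network on the grid's values
    return field.l2_loss(prediction - clean)

for step in range(100):
    noisy, clean = load_training_pair()  # placeholder: supply your own data here
    loss = update_weights(net, optimizer, loss_function, noisy, clean)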

7 Visualization

Messy. They created a lovely UI for controlling simulations interactively, but it is messy and unsatisfactory, because python GUIs suck, jupyter GUIs suck, and trying to serve two sucky masters is tedious.

As per the advice of lead developer Philipp Holl, I ignore the entire Vis system, which only works from command-line scripts, and plot inside jupyter notebooks for now. If I wanted something more sophisticated I might use the ΦFlow Web Interface, which integrates with dash. I am not sure why they do not just use one of the fairly standard tools for ML experiment tracking and visualisation, such as tensorboard; the developers of those tools have already experienced the many irritations of trying to do this stuff interactively and found workarounds.
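
For the record, the minimal notebook plotting I do use looks something like this (a sketch; the grid is a stand-in for whatever you actually have):

from phi.flow import *

smoke = CenteredGrid(Noise(), extrapolation.ZERO, x=64, y=64, bounds=Box(x=1, y=1))
vis.plot(smoke)  # returns a matplotlib figure, displayed inline in jupyter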

8 Efficiency/GPUs

I find it congenial to use the EXCESSIVE KILL IT WITH FIRE method to moderate ΦFlow’s ambition with regard to GPUs. In some versions it is too keen to use CUDA and in others too averse; e.g. in 2.2.7, even if I am not using CUDA, math.seed will still try to use the GPU, which is rude behaviour on shared machines.

Keep it simple by avoiding ambiguity:

import os
import torch

from phi.torch.flow import *
from phi.torch import TORCH

GPU mode:

os.environ.pop("CUDA_VISIBLE_DEVICES", None)  # Dangerous on shared machines!
TORCH.set_default_device('GPU')
PHI_DEVICE = 'GPU'             # pass this to phiflow functions
DEVICE = torch.device('cuda')  # pass this to pytorch functions

CPU mode:

os.environ["CUDA_VISIBLE_DEVICES"] = ""
TORCH.set_default_device('CPU')
PHI_DEVICE = 'CPU'             # pass this to phiflow functions
DEVICE = torch.device('cpu')   # pass this to pytorch functions

9 Useful examples

10 Data storage

I do not use the native phiflow system, since I store everything in hdf5. But it is nice that the API is documented.

11 References

Holl, Philipp, Vladlen Koltun, Kiwon Um, and Nils Thuerey. 2020. “Phiflow: A Differentiable PDE Solving Framework for Deep Learning via Physical Simulations.” In NeurIPS Workshop.
Holl, Philipp, Nils Thuerey, and Vladlen Koltun. 2020. “Learning to Control PDEs with Differentiable Physics.” In ICLR, 5.
Thuerey, Nils, Philipp Holl, Maximilian Mueller, Patrick Schnell, Felix Trost, and Kiwon Um. 2021. Physics-Based Deep Learning. WWW.
Um, Kiwon, Robert Brand, Yun Fei, Philipp Holl, and Nils Thuerey. 2021. “Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers.” arXiv:2007.00016 [physics], January.
Um, Kiwon, and Philipp Holl. 2021. “Differentiable Physics for Improving the Accuracy of Iterative PDE-Solvers with Neural Networks.” In, 5.