Neural PDE operator learning on domains with interesting geometry
Can ML solve PDEs on non-Euclidean domains?
2019-10-15 — 2025-03-07
Wherein implicit diffeomorphisms are learned to map Euclidean grids onto irregular manifolds, and neural operators are adapted to mask and represent partial differential equations on such domains.
Placeholder for a discussion of how to apply machine learning to partial differential equations on interesting, weirdly-shaped domains, by which I mean things that are not toruses, spheres, or hyper-rectangles.
Why does this even need saying? you might ask. Surely most PDEs are solved on interestingly-shaped objects? That question would indicate that you come from a PDE background. On the ML side, the ascendancy of neural nets means that by default everything lives on non-challenging geometries. The field has seen early success on rasterised grids (and spheres and toruses), which are easy to analyse and convenient to compute on efficiently. The next step is to generalise to more interesting geometries and consider the implications of those generalisations. That is what this notebook is about.
1 Mapping Euclidean methods to non-Euclidean domains
This seems to be mostly about using implicit (coordinate-network) representations to learn diffeomorphisms that map between a Euclidean grid and a non-Euclidean domain, so that grid-based operators can be pulled back to the irregular geometry.
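A minimal sketch of the ingredient this requires, under my own assumptions (the network, its weights, and the finite-difference check are all illustrative, not from any particular paper): represent the candidate map as the identity plus a small MLP displacement field, and verify that its Jacobian determinant stays positive on the reference grid, a necessary condition for the map to be a local diffeomorphism (no folding).

```python
import numpy as np

rng = np.random.default_rng(0)

# Small coordinate MLP giving a displacement field d(u), so that
# phi(u) = u + d(u) warps the unit square into an irregular domain.
# Weights are random here; in practice they would be trained.
W1 = 0.1 * rng.standard_normal((2, 32))
b1 = np.zeros(32)
W2 = 0.1 * rng.standard_normal((32, 2))
b2 = np.zeros(2)

def phi(u):
    """Candidate diffeomorphism from reference coords to physical coords."""
    h = np.tanh(u @ W1 + b1)
    return u + h @ W2 + b2

def jacobian_det(u, eps=1e-5):
    """Central finite-difference Jacobian determinant of phi at point u."""
    cols = [(phi(u + eps * e) - phi(u - eps * e)) / (2 * eps)
            for e in np.eye(2)]
    J = np.stack(cols, axis=1)  # column i is d(phi)/d(u_i)
    return np.linalg.det(J)

# If det(J) > 0 at every sample, phi is locally invertible and
# orientation-preserving there -- no local folding of the grid.
grid = np.stack(np.meshgrid(np.linspace(0.05, 0.95, 10),
                            np.linspace(0.05, 0.95, 10)),
                axis=-1).reshape(-1, 2)
dets = np.array([jacobian_det(u) for u in grid])
print("min det(J):", dets.min())  # positive for this small displacement
```

In a learned setting one would add a penalty on negative (or near-zero) determinants during training, or parameterise the map to be invertible by construction.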
2 Masking
TODO
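Pending the real discussion, here is how I understand the masking idea, sketched under my own assumptions (the disk domain, the stencil, and the helper names are illustrative): keep the convenient rasterised grid, but carry a binary mask marking which pixels lie inside the irregular domain, and re-apply the mask after every convolution so the operator never produces (or consumes) values off-domain.

```python
import numpy as np

n = 32
ys, xs = np.mgrid[0:n, 0:n] / (n - 1)
# Irregular domain: a disk embedded in the rasterised unit square.
mask = ((xs - 0.5) ** 2 + (ys - 0.5) ** 2 < 0.4 ** 2).astype(float)

# A field on the domain, zeroed outside it.
u = np.sin(np.pi * xs) * np.sin(np.pi * ys) * mask

def masked_conv(u, mask, kernel):
    """Apply a convolution stencil, then re-mask the output so that
    exterior points stay exactly zero."""
    kh, kw = kernel.shape
    pad = np.pad(u, ((kh // 2,), (kw // 2,)))
    out = np.zeros_like(u)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + n, j:j + n]
    return out * mask

# 5-point Laplacian stencil as a stand-in for one layer of a neural operator.
laplace = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
lap_u = masked_conv(u, mask, laplace)
print(abs(lap_u[mask == 0]).max())  # exactly 0 off the domain
```

Note the subtlety this sketch glosses over: near the boundary the stencil still reads zero-padded exterior values, which implicitly imposes a homogeneous Dirichlet condition; other boundary conditions need a more careful treatment.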
3 Transformers
TODO
4 Graph neural operators
TODO
5 DeepONets
TODO
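While this section is pending, the basic DeepONet structure is worth recording, since it is grid-free by construction and so copes naturally with odd geometries: a branch net encodes the input function from its values at fixed sensor locations, a trunk net encodes an arbitrary query coordinate, and the operator output is their inner product. The sketch below uses untrained random weights and illustrative sizes; it shows the forward pass only.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    """Build a random-weight tanh MLP; returns its forward function.
    Untrained -- this is an architecture sketch, not a fitted model."""
    params = [(0.3 * rng.standard_normal((m, n)), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W, b in params[:-1]:
            x = np.tanh(x @ W + b)
        W, b = params[-1]
        return x @ W + b
    return forward

m, p = 20, 16              # number of input sensors, latent width
branch = mlp([m, 64, p])   # encodes input function a, sampled at sensors
trunk = mlp([1, 64, p])    # encodes a query location y (here 1-D)

sensors = np.linspace(0, 1, m)
a = np.sin(2 * np.pi * sensors)        # the input function, sampled
y = np.linspace(0, 1, 50)[:, None]     # query points, any locations we like

# DeepONet prediction: G(a)(y) ~ <branch(a), trunk(y)>
G_a_y = trunk(y) @ branch(a)
print(G_a_y.shape)  # one output value per query point
```

The relevance to this notebook: because the trunk net accepts arbitrary coordinates, nothing ties the query points to a rectangular grid, so the same machinery applies when they are scattered over an irregular manifold.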
