Doing inference where the probability metric measuring the discrepancy between some target distribution and the implied inferential distribution is an optimal-transport one. Frequently intractable, but neat when we can get it. Sometimes we might get there by estimating (the gradients of) an actual OT loss, or even the transport maps that imply that loss.
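For intuition, here is a toy sketch of the continuous-state idea: fit a location parameter by gradient descent on an exact one-dimensional squared 2-Wasserstein loss, which for equal-size samples reduces to comparing order statistics. All names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target sample y, and a model that shifts a base sample x by theta.
y = rng.normal(loc=3.0, scale=1.0, size=500)
x = rng.normal(loc=0.0, scale=1.0, size=500)

# For equal-size 1-D samples, squared 2-Wasserstein distance is the mean
# squared difference between order statistics, so its gradient in theta
# is available in closed form.
xs, ys = np.sort(x), np.sort(y)

theta = 0.0
for _ in range(200):
    grad = 2.0 * np.mean(xs + theta - ys)  # d/dtheta of W2^2
    theta -= 0.1 * grad
```

The minimizer is the mean of the sorted-sample differences, so `theta` converges to roughly the location shift of 3; in higher dimensions no such closed form exists, which is where the estimation machinery below comes in.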

Placeholder/grab bag.

TODO: should we break this into discrete-state and continuous-state cases? Machinery looks different.

## NNs

Wasserstein GANs and OT-GANs (Salimans et al. 2018) are argued to perform approximate optimal-transport inference, indirectly.

## Surprise connection to matrix factorisation

Non-negative matrix factorisation via OT is a thing, e.g. in topic modeling (Huynh, Zhao, and Phung 2020; Zhao et al. 2020).

## Via Fisher distance

See e.g. J. H. Huggins et al. (2018b, 2018a) for a particular Bayes posterior approximation using OT and the Fisher distance.

## Minibatched

Daniel Daza, in *Approximating Wasserstein distances with PyTorch*, touches upon Fatras et al. (2020):

Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning. Yet their algorithmic complexity prevents their direct use on large scale datasets. To overcome this challenge, practitioners compute these distances on minibatches i.e., they average the outcome of several smaller optimal transport problems. We propose in this paper an analysis of this practice, which effects are not well understood so far. We notably argue that it is equivalent to an implicit regularization of the original problem, with appealing properties such as unbiased estimators, gradients and a concentration bound around the expectation, but also with defects such as loss of distance property.
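A minimal sketch of the practice in one dimension, where `scipy` computes the exact OT (1-Wasserstein) distance cheaply, so we can compare the minibatch average against the full-sample distance. All sizes and names are illustrative:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Two large 1-D samples whose OT distance we want to estimate.
x = rng.normal(0.0, 1.0, size=10_000)
y = rng.normal(0.5, 1.0, size=10_000)

full = wasserstein_distance(x, y)  # exact 1-D W1 on the full samples

# Minibatch estimator: average exact OT distances over small subsample pairs.
batch, n_batches = 64, 200
mb = float(np.mean([
    wasserstein_distance(rng.choice(x, batch), rng.choice(y, batch))
    for _ in range(n_batches)
]))
```

The minibatch average is cheap but typically overestimates the full distance (it is strictly positive even for identical distributions), illustrating the "loss of distance property" the abstract mentions.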

## Linearized embedding

Noted in Bai et al. (2023) via Cheng-Soon Ong:

Comparing K (probability) measures requires the pairwise calculation of transport-based distances, which, despite the significant recent computational speed-ups, remains to be relatively expensive. To address this problem, W. Wang et al. (2013) proposed the Linear Optimal Transport (LOT) framework, which linearizes the 2-Wasserstein distance utilizing its weak Riemannian structure. In short, the probability measures are embedded into the tangent space at a fixed reference measure (e.g., the measures’ Wasserstein barycenter) through a logarithmic map. The Euclidean distances between the embedded measures then approximate the 2-Wasserstein distance between the probability measures. The LOT framework is computationally attractive as it only requires the computation of one optimal transport problem per input measure, reducing the otherwise quadratic cost to linear. Moreover, the framework provides theoretical guarantees on convexifying certain sets of probability measures […], which is critical in supervised and unsupervised learning from sets of probability measures.
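In one dimension the LOT construction is easy to write down explicitly: with an atomless reference measure, the logarithmic map of each measure is essentially its quantile function, so embedding measures as quantile vectors on a shared grid makes Euclidean distances between embeddings match the 2-Wasserstein distance. A numpy sketch under those 1-D assumptions (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three empirical measures (1-D samples) to compare.
samples = [rng.normal(0, 1, 400), rng.normal(2, 1, 400), rng.gamma(2.0, 1.0, 400)]

# Embed each measure by its quantile function evaluated on a fixed grid of
# quantile levels; in 1-D this is the LOT/log map at a uniform reference.
q = (np.arange(200) + 0.5) / 200
emb = [np.quantile(s, q) for s in samples]

def lot_dist(i, j):
    # Euclidean distance between embeddings approximates W2(mu_i, mu_j).
    return np.sqrt(np.mean((emb[i] - emb[j]) ** 2))

def w2_direct(i, j):
    # Exact empirical 2-Wasserstein in 1-D via sorted samples (equal sizes).
    return np.sqrt(np.mean((np.sort(samples[i]) - np.sort(samples[j])) ** 2))
```

One embedding per measure suffices, so K measures cost K quantile computations rather than K(K−1)/2 transport solves; in higher dimensions the log map requires solving one OT problem per measure against the reference.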

## Tools

### OTT

Optimal Transport Tools (OTT) (Cuturi et al. 2022), a toolbox for all things Wasserstein (documentation):

The goal of OTT is to provide sturdy, versatile and efficient optimal transport solvers, taking advantage of JAX features, such as JIT, auto-vectorization and implicit differentiation.

A typical OT problem has two ingredients: a pair of weight vectors `a` and `b` (one for each measure), with a ground cost matrix that is either directly given, or derived as the pairwise evaluation of a cost function on pairs of points taken from two measures. The main design choice in OTT comes from encapsulating the cost in a `Geometry` object, and [bundling] it with a few useful operations (notably kernel applications). The most common geometry is that of two clouds of vectors compared with the squared Euclidean distance, as illustrated in the example below:

```
import jax
import jax.numpy as jnp
from ott.tools import transport

# Sample two point clouds and their weights.
rngs = jax.random.split(jax.random.PRNGKey(0), 4)
n, m, d = 12, 14, 2
x = jax.random.normal(rngs[0], (n, d)) + 1
y = jax.random.uniform(rngs[1], (m, d))
a = jax.random.uniform(rngs[2], (n,))
b = jax.random.uniform(rngs[3], (m,))
a, b = a / jnp.sum(a), b / jnp.sum(b)

# Compute the coupling via the Sinkhorn algorithm.
ot = transport.solve(x, y, a=a, b=b)
P = ot.matrix
```

The call to `sinkhorn` above works out the optimal transport solution by storing its output. The transport matrix can be instantiated using those optimal solutions and the `Geometry` again. That transport matrix links each point from the first point cloud to one or more points from the second, as illustrated below.

To be more precise, the `sinkhorn` algorithm operates on the `Geometry`, taking into account weights `a` and `b`, to solve the OT problem and produce a named tuple that contains two optimal dual potentials `f` and `g` (vectors of the same size as `a` and `b`), the objective `reg_ot_cost`, a log of the `errors` of the algorithm as it converges, and a `converged` flag.

### POT

POT: Python Optimal Transport (Rémi Flamary et al. 2021)

This open source Python library provides several solvers for optimization problems related to Optimal Transport for signal and image processing and machine learning.

Website and documentation: https://PythonOT.github.io/

Source Code (MIT): https://github.com/PythonOT/POT

POT provides the following generic OT solvers (links to examples):

- OT Network Simplex solver for the linear program / Earth Mover's Distance.
- Conditional gradient and Generalized conditional gradient for regularized OT.
- Entropic regularization OT solver with the Sinkhorn-Knopp algorithm, stabilized version, greedy Sinkhorn and Screening Sinkhorn.
- Bregman projections for Wasserstein barycenter, convolutional barycenter and unmixing.
- Sinkhorn divergence and entropic regularization OT from empirical data.
- Debiased Sinkhorn barycenters and Sinkhorn divergence barycenter.
- Smooth optimal transport solvers (dual and semi-dual) for KL and squared L2 regularizations.
- Weak OT solver between empirical distributions.
- Non-regularized Wasserstein barycenters with LP solver (only small scale).
- Gromov-Wasserstein distances and GW barycenters (exact and regularized), differentiable using gradients from Graph Dictionary Learning.
- Fused Gromov-Wasserstein distances solver and FGW barycenters.
- Stochastic solver and differentiable losses for large-scale optimal transport (semi-dual problem and dual problem).
- Sampled solver of Gromov-Wasserstein for large-scale problems with any loss function.
- Non-regularized free-support Wasserstein barycenters.
- One-dimensional Unbalanced OT with KL relaxation and barycenter \[10, 25\]. Also exact unbalanced OT with KL and quadratic regularization, and the regularization path of UOT.
- Partial Wasserstein and Gromov-Wasserstein (exact and entropic formulations).
- Sliced Wasserstein \[31, 32\] and Max-sliced Wasserstein, which can be used for gradient flows.
- Graph Dictionary Learning solvers.
- Several backends for easy use of POT with PyTorch/JAX/NumPy/CuPy/TensorFlow arrays.

POT provides the following machine-learning-related solvers:

- Optimal transport for domain adaptation with group-lasso regularization, Laplacian regularization and a semi-supervised setting.
- Linear OT mapping and Joint OT mapping estimation.
- Wasserstein Discriminant Analysis (requires autograd + pymanopt).
- JCPOT algorithm for multi-source domain adaptation with target shift.

Some other examples are available in the documentation.
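Many of the entropic solvers above are elaborations of the basic Sinkhorn-Knopp iteration. A minimal numpy sketch (not POT's actual implementation; for real use, call the library):

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.2, n_iter=1000):
    """Entropic-regularized OT plan between histograms a, b for cost matrix M."""
    K = np.exp(-M / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale to match column marginals
        u = a / (K @ v)                  # scale to match row marginals
    return u[:, None] * K * v[None, :]   # plan P = diag(u) K diag(v)

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 2))
y = rng.normal(size=(7, 2)) + 1.0
M = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
M = M / M.max()                          # normalize cost for numerical stability
a = np.full(6, 1 / 6)
b = np.full(7, 1 / 7)
P = sinkhorn(a, b, M)
```

The plan `P` has (approximately) the prescribed marginals `a` and `b`; smaller `reg` sharpens it towards the unregularized optimal coupling at the cost of slower, less stable convergence, which is what the stabilized and greedy variants above address.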

### GeomLoss

The GeomLoss library provides efficient GPU implementations for:

- Kernel norms (also known as Maximum Mean Discrepancies).
- Hausdorff divergences, which are positive definite generalizations of the Chamfer-ICP loss and are analogous to log-likelihoods of Gaussian Mixture Models.
- Debiased Sinkhorn divergences, which are affordable yet positive and definite approximations of Optimal Transport (Wasserstein) distances.

It is hosted on GitHub and distributed under the permissive MIT license.

GeomLoss functions are available through the custom PyTorch layers `SamplesLoss`, `ImagesLoss` and `VolumesLoss`, which allow you to work with weighted point clouds (of any dimension), density maps and volumetric segmentation masks.
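The debiasing trick is simple to state: subtract the self-transport terms, S(α, β) = OT(α, β) − ½ OT(α, α) − ½ OT(β, β), which vanishes when the two measures coincide. A crude numpy sketch of the idea (not GeomLoss's multiscale implementation; it uses the plain transport cost and a per-pair scale for the regularization, which sacrifices the exact theory but keeps the sketch numerically safe):

```python
import numpy as np

def sinkhorn_cost(x, y, blur=0.2, n_iter=1000):
    """Entropic OT cost <P, M> between two uniform point clouds."""
    M = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    eps = blur * M.max()                 # crude scale-aware regularization
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    K = np.exp(-M / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):             # Sinkhorn fixed-point iteration
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return float((P * M).sum())

def sinkhorn_divergence(x, y, blur=0.2):
    # Subtracting the self-terms removes the entropic bias ("blur").
    return (sinkhorn_cost(x, y, blur)
            - 0.5 * sinkhorn_cost(x, x, blur)
            - 0.5 * sinkhorn_cost(y, y, blur))

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 2))
y = rng.normal(size=(30, 2)) + 1.0
```

By construction the divergence is exactly zero between a cloud and itself, while remaining positive for distinct clouds, which is the "positive and definite" property advertised above.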

## Incoming

Rigollet and Weed (2018):

We give a statistical interpretation of entropic optimal transport by showing that performing maximum-likelihood estimation for Gaussian deconvolution corresponds to calculating a projection with respect to the entropic optimal transport distance.

Thomas Viehmann's *An efficient implementation of the Sinkhorn algorithm for the GPU* is a PyTorch CUDA extension (Viehmann 2019).

Marco Cuturi’s course notes on OT include a 400-page slide deck.

## References

*SIAM Journal on Mathematical Analysis* 43 (2): 904–24.

*Advances in Neural Information Processing Systems* 32.

*Proceedings of the 32Nd International Conference on Neural Information Processing Systems*, 2478–87. NIPS’18. USA: Curran Associates Inc.

*Gradient Flows: In Metric Spaces and in the Space of Probability Measures*. 2nd ed. Lectures in Mathematics. ETH Zürich. Birkhäuser Basel.

*SIAM Journal on Mathematical Analysis* 35 (1): 61–97.

*International Conference on Machine Learning*, 214–23.

*arXiv:1703.00573 [Cs]*, March.

*arXiv:1805.00753 [Stat]*, April.

*Acta Numerica* 30 (May): 249–325.

*arXiv:1412.5154 [Math]*, December.

*UAI18*.

*IFAC Proceedings Volumes*, 19th IFAC World Congress, 47 (3): 8662–68.

*arXiv:1802.04885 [Stat]*, February.

*arXiv:1810.07717 [Cs]*, October.

*arXiv:1610.05627 [Math, Stat]*, October.

*arXiv:1906.01614 [Math, Stat]*, June.

*AISTATS 2018*.

*Electronic Journal of Probability* 16 (none).

*arXiv:1209.1077 [Cs, Stat]*, September.

*arXiv:1607.05816 [Math]*, May.

*ICML*.

*arXiv:2102.07850 [Cs, Stat]*, June.

*arXiv:1507.00504 [Cs]*, June.

*Advances in Neural Information Processing Systems 26*.

*International Conference on Machine Learning*, 685–93. PMLR.

*Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics*, 2131–41. PMLR.

*von Mises calculus for statistical functionals*. Lecture Notes in Statistics 19. New York: Springer.

*Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics*, 2681–90. PMLR.

*Journal of Machine Learning Research* 22 (78): 1–8.

*Machine Learning* 107 (12): 1923–45.

*Advances in Neural Information Processing Systems 28*, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2053–61. Curran Associates, Inc.

*SIAM Journal on Applied Dynamical Systems* 19 (1): 412–41.

*Advances in Neural Information Processing Systems 29*, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3432–40. Curran Associates, Inc.

*arXiv:1706.00292 [Stat]*, October.

*Advances in Neural Information Processing Systems 27*, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc.

*arXiv:1003.3852 [Math]*, March.

*arXiv:1704.00028 [Cs, Stat]*, March.

*arXiv:1705.07164 [Cs, Stat]*, May.

*arXiv:1806.10234 [Cs, Stat]*, June.

*arXiv:1809.09505 [Cs, Math, Stat]*, September.

*arXiv:1910.04102 [Cs, Math, Stat]*, October.

*Advances in Neural Information Processing Systems 30*, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 3611–21. Curran Associates, Inc.

*Advances in Neural Information Processing Systems*, 33:18573–82. Curran Associates, Inc.

*Information Geometry*, June.

*Discrete & Continuous Dynamical Systems - A* 34 (4): 1533.

*International Conference on Machine Learning*, 3159–68.

*Advances In Neural Information Processing Systems*.

*PMLR*, 2218–27.

*arXiv:1906.03317 [Cs, Math, Stat]*, June.

*Information Geometry*, August.

*Handbook of Uncertainty Quantification*, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1:1–41. Cham: Springer Heidelberg.

*SIAM/ASA Journal on Uncertainty Quantification*, February, 96–124.

*Mathematical Programming* 171 (1): 115–66.

*Advances in Neural Information Processing Systems 29*, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3711–19. Curran Associates, Inc.

*Annual Review of Statistics and Its Application* 6 (1): 405–31.

*Computational Optimal Transport*. Vol. 11.

*International Conference on Machine Learning*, 2664–72. PMLR.

*Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence*, 1960–66. IJCAI’16. New York, New York, USA: AAAI Press.

*The 22nd International Conference on Artificial Intelligence and Statistics*, 849–58. PMLR.

*International Conference on Machine Learning*, 1530–38. ICML’15. Lille, France: JMLR.org.

*Stat* 10 (1): e329.

*Geophysical Journal International* 231 (1): 172–98.

*Optimal Transport for Applied Mathematicians*. Edited by Filippo Santambrogio. Progress in Nonlinear Differential Equations and Their Applications. Cham: Springer International Publishing.

*Proceedings of the 38th International Conference on Machine Learning*, 9344–54. PMLR.

*SIAM Journal on Imaging Sciences* 11 (1): 643–78.

*arXiv:1610.06519 [Cs, Math]*, February.

*ACM Transactions on Graphics* 34 (4): 66:1–11.

*Journal of Machine Learning Research* 19 (66): 2639–709.

*IEEE Transactions on Automatic Control* 66 (7): 3052–67.

*Electronic Journal of Statistics* 13 (2): 5088–5119.

*Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, 284–94. Minneapolis, Minnesota: Association for Computational Linguistics.

*International Journal of Computer Vision* 101 (2): 254–69.

*Proceedings of NeurIPS 2020*.

*IEEE Transactions on Information Theory* 66 (11): 7155–79.
