Optimal transport inference

I feel the earth mover under my feet, I feel the ψ tumbling down, I feel my heart start to trembling, Whenever you’re around my empirical density in minimal transport cost



Inference in which the probability metric measuring the discrepancy between some target distribution and the implied inferential distribution is an optimal-transport one. Frequently intractable, but neat when we can get it.
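As a toy illustration (a sketch of my own, not taken from any of the papers below): in one dimension the 2-Wasserstein distance between two equally sized, equally weighted samples is computed by matching sorted samples, and it is differentiable enough that we can fit a location parameter by gradient descent on it.

import jax
import jax.numpy as jnp

def w2_squared_1d(x, y):
    # In 1-d the optimal coupling between equally weighted samples of
    # equal size is the monotone (sorted) matching, so squared W2 is
    # the mean squared gap between order statistics.
    return jnp.mean((jnp.sort(x) - jnp.sort(y)) ** 2)

# Toy OT inference: recover a location parameter by minimizing the
# Wasserstein discrepancy between model samples and data samples.
data = 3.0 + jax.random.normal(jax.random.PRNGKey(0), (500,))
noise = jax.random.normal(jax.random.PRNGKey(1), (500,))
loss = lambda theta: w2_squared_1d(theta + noise, data)
grad = jax.grad(loss)

theta = 0.0
for _ in range(100):
    theta -= 0.1 * grad(theta)
# theta ends up near the true location parameter, 3.0.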

Wasserstein GANs are argued to perform approximate optimal-transport inference, indirectly. See e.g. (J. H. Huggins et al. 2018b, 2018a) for a particular Bayes posterior approximation using OT.
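The connection runs through Kantorovich–Rubinstein duality, which rewrites the 1-Wasserstein distance as a supremum over Lipschitz witness functions,

$$W_1(\mu, \nu) = \sup_{\|f\|_{\mathrm{Lip}} \le 1} \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)],$$

so the WGAN critic can be read as a parametric stand-in for the optimal witness $f$ (Arjovsky, Chintala, and Bottou 2017).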

Tools

OTT

Optimal Transport Tools (OTT), a toolbox for all things Wasserstein (documentation):

The goal of OTT is to provide sturdy, versatile and efficient optimal transport solvers, taking advantage of JAX features, such as JIT, auto-vectorization and implicit differentiation.

A typical OT problem has two ingredients: a pair of weight vectors a and b (one for each measure), with a ground cost matrix that is either directly given, or derived as the pairwise evaluation of a cost function on pairs of points taken from the two measures. The main design choice in OTT comes from encapsulating the cost in a Geometry object, and bundling it with a few useful operations (notably kernel applications). The most common geometry is that of two clouds of vectors compared with the squared Euclidean distance, as illustrated in the example below:

A self-contained example of this in action:

import jax
import jax.numpy as jnp
from ott.tools import transport

# Sample two point clouds and a weight vector for each.
rngs = jax.random.split(jax.random.PRNGKey(0), 4)
n, m, d = 12, 14, 2
x = jax.random.normal(rngs[0], (n, d)) + 1
y = jax.random.uniform(rngs[1], (m, d))
a = jax.random.uniform(rngs[2], (n,))
b = jax.random.uniform(rngs[3], (m,))
# Normalize the weights so each cloud carries unit total mass.
a, b = a / jnp.sum(a), b / jnp.sum(b)

# Compute the coupling via the Sinkhorn algorithm.
ot = transport.solve(x, y, a=a, b=b)
P = ot.matrix  # the (n, m) transport matrix

The call to transport.solve above runs the Sinkhorn algorithm and stores its output. The transport matrix can then be instantiated from those optimal solutions together with the Geometry. That transport matrix links each point from the first point cloud to one or more points from the second, as illustrated below.

obtained coupling
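A quick sanity check on the coupling (continuing the example above): the row and column sums of P should recover the prescribed marginal weights, up to the tolerance of the entropy-regularized solver.

# Each row i of P spreads the mass a[i] of a source point over the
# target points, so the coupling's marginals should match a and b
# (approximately, since Sinkhorn solves a regularized problem to a
# finite tolerance).
print(jnp.allclose(P.sum(axis=1), a, atol=1e-3))  # row sums ≈ a
print(jnp.allclose(P.sum(axis=0), b, atol=1e-3))  # column sums ≈ b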

To be more precise, the Sinkhorn algorithm operates on the Geometry, taking the weights a and b into account, to solve the regularized OT problem. It produces a named tuple that contains the two optimal dual potentials f and g (vectors of the same sizes as a and b respectively), the objective value reg_ot_cost, a log of the algorithm's errors as it converges, and a converged flag.
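Concretely, and assuming the solver output exposes exactly the fields just described (this matches recent ott-jax releases, but check your version), we can inspect the solution like so:

# Dual potentials: one entry per source point and one per target point.
f, g = ot.f, ot.g          # shapes (n,) and (m,)
print(ot.reg_ot_cost)      # the regularized OT objective
print(ot.converged)        # whether Sinkhorn met its convergence threshold
print(ot.errors)           # marginal errors logged as the algorithm ran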

Incoming

  • Rigollet and Weed (2018): “We give a statistical interpretation of entropic optimal transport by showing that performing maximum-likelihood estimation for Gaussian deconvolution corresponds to calculating a projection with respect to the entropic optimal transport distance.”

References

Agueh, Martial, and Guillaume Carlier. 2011. “Barycenters in the Wasserstein Space.” SIAM Journal on Mathematical Analysis 43 (2): 904–24.
Alaya, Mokhtar Z., Maxime Berar, Gilles Gasso, and Alain Rakotomamonjy. 2019. “Screening Sinkhorn Algorithm for Regularized Optimal Transport.” Advances in Neural Information Processing Systems 32.
Altschuler, Jason, Jonathan Niles-Weed, and Philippe Rigollet. n.d. “Near-Linear Time Approximation Algorithms for Optimal Transport via Sinkhorn Iteration.”
Ambrogioni, Luca, Umut Guclu, and Marcel van Gerven. 2018. “Wasserstein Variational Gradient Descent: From Semi-Discrete Optimal Transport to Ensemble Variational Inference.” arXiv:1811.02827 [Cs, Stat], November.
Ambrogioni, Luca, Umut Güçlü, Yagmur Güçlütürk, Max Hinne, Eric Maris, and Marcel A. J. van Gerven. 2018. “Wasserstein Variational Inference.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2478–87. NIPS’18. USA: Curran Associates Inc.
Ambrosio, Luigi, Nicola Gigli, and Giuseppe Savare. 2008. Gradient Flows: In Metric Spaces and in the Space of Probability Measures. 2nd ed. Lectures in Mathematics. ETH Zürich. Birkhäuser Basel.
Angenent, Sigurd, Steven Haker, and Allen Tannenbaum. 2003. “Minimizing Flows for the Monge–Kantorovich Problem.” SIAM Journal on Mathematical Analysis 35 (1): 61–97.
Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. “Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.
Arora, Sanjeev, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. 2017. “Generalization and Equilibrium in Generative Adversarial Nets (GANs).” arXiv:1703.00573 [Cs], March.
Bachoc, Francois, Alexandra Suvorikova, David Ginsbourger, Jean-Michel Loubes, and Vladimir Spokoiny. 2019. “Gaussian Processes with Multidimensional Distribution Inputs via Optimal Transport and Hilbertian Embedding.” arXiv:1805.00753 [Stat], April.
Benamou, Jean-David. 2021. “Optimal Transportation, Modelling and Numerical Simulation.” Acta Numerica 30 (May): 249–325.
Benamou, Jean-David, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyré. 2014. “Iterative Bregman Projections for Regularized Transportation Problems.” arXiv:1412.5154 [Math], December.
Berg, Rianne van den, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. 2018. “Sylvester Normalizing Flows for Variational Inference.” In UAI18.
Bishop, Adrian N., and Arnaud Doucet. 2014. “Distributed Nonlinear Consensus in the Space of Probability Measures.” IFAC Proceedings Volumes, 19th IFAC World Congress, 47 (3): 8662–68.
Blanchet, Jose, Lin Chen, and Xun Yu Zhou. 2018. “Distributionally Robust Mean-Variance Portfolio Selection with Wasserstein Distances.” arXiv:1802.04885 [Stat], February.
Blanchet, Jose, Arun Jambulapati, Carson Kent, and Aaron Sidford. 2018. “Towards Optimal Running Times for Optimal Transport.” arXiv:1810.07717 [Cs], October.
Blanchet, Jose, Yang Kang, and Karthyek Murthy. 2016. “Robust Wasserstein Profile Inference and Applications to Machine Learning.” arXiv:1610.05627 [Math, Stat], October.
Blanchet, Jose, Karthyek Murthy, and Nian Si. 2019. “Confidence Regions in Wasserstein Distributionally Robust Estimation.” arXiv:1906.01614 [Math, Stat], June.
Blanchet, Jose, Karthyek Murthy, and Fan Zhang. 2018. “Optimal Transport Based Distributionally Robust Optimization: Structural Properties and Iterative Schemes.” arXiv:1810.02403 [Math], October.
Blondel, Mathieu, Vivien Seguy, and Antoine Rolet. 2018. “Smooth and Sparse Optimal Transport.” In AISTATS 2018.
Boissard, Emmanuel. 2011. “Simple Bounds for the Convergence of Empirical and Occupation Measures in 1-Wasserstein Distance.” Electronic Journal of Probability 16.
Bonneel, Nicolas. n.d. “Displacement Interpolation Using Lagrangian Mass Transport.”
Canas, Guillermo D., and Lorenzo Rosasco. 2012. “Learning Probability Measures with Respect to Optimal Transport Metrics.” arXiv:1209.1077 [Cs, Stat], September.
Carlier, Guillaume, Marco Cuturi, Brendan Pass, and Carola Schoenlieb. 2017. “Optimal Transport Meets Probability, Statistics and Machine Learning.”
Chizat, Lenaic, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. 2017. “Scaling Algorithms for Unbalanced Transport Problems.” arXiv:1607.05816 [Math], May.
Chu, Casey, Jose Blanchet, and Peter Glynn. 2019. “Probability Functional Descent: A Unifying Perspective on GANs, Variational Inference, and Reinforcement Learning.” In ICML.
Corenflos, Adrien, James Thornton, George Deligiannidis, and Arnaud Doucet. 2021. “Differentiable Particle Filtering via Entropy-Regularized Optimal Transport.” arXiv:2102.07850 [Cs, Stat], June.
Coscia, Michele. 2020. “Generalized Euclidean Measure to Estimate Network Distances.”
Courty, Nicolas, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. 2016. “Optimal Transport for Domain Adaptation.” arXiv:1507.00504 [Cs], June.
Cuturi, Marco. 2013. “Sinkhorn Distances: Lightspeed Computation of Optimal Transportation Distances.” In Advances in Neural Information Processing Systems 26.
Cuturi, Marco, and Arnaud Doucet. 2014. “Fast Computation of Wasserstein Barycenters.” In International Conference on Machine Learning, 685–93. PMLR.
Fernholz, Luisa Turrin. 1983. Von Mises Calculus for Statistical Functionals. Lecture Notes in Statistics 19. New York: Springer.
Flamary, Rémi, Marco Cuturi, Nicolas Courty, and Alain Rakotomamonjy. 2018. “Wasserstein Discriminant Analysis.” Machine Learning 107 (12): 1923–45.
Flamary, Remi, Alain Rakotomamonjy, Nicolas Courty, and Devis Tuia. n.d. “Optimal Transport with Laplacian Regularization.”
Frogner, Charlie, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. 2015. “Learning with a Wasserstein Loss.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2053–61. Curran Associates, Inc.
Gao, Rui, and Anton J. Kleywegt. 2016. “Distributionally Robust Stochastic Optimization with Wasserstein Distance.” arXiv:1604.02199 [Math], April.
Garbuno-Inigo, Alfredo, Franca Hoffmann, Wuchen Li, and Andrew M. Stuart. 2020. “Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems 19 (1): 412–41.
Genevay, Aude, Marco Cuturi, Gabriel Peyré, and Francis Bach. 2016. “Stochastic Optimization for Large-Scale Optimal Transport.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3432–40. Curran Associates, Inc.
Genevay, Aude, Gabriel Peyré, and Marco Cuturi. 2017. “Learning Generative Models with Sinkhorn Divergences.” arXiv:1706.00292 [Stat], October.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc.
Gozlan, Nathael, and Christian Léonard. 2010. “Transport Inequalities. A Survey.” arXiv:1003.3852 [Math], March.
Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. “Improved Training of Wasserstein GANs.” arXiv:1704.00028 [Cs, Stat], March.
Guo, Xin, Johnny Hong, Tianyi Lin, and Nan Yang. 2017. “Relaxed Wasserstein with Applications to GANs.” arXiv:1705.07164 [Cs, Stat], May.
Huggins, Jonathan H., Trevor Campbell, Mikołaj Kasprzak, and Tamara Broderick. 2018a. “Scalable Gaussian Process Inference with Finite-Data Mean and Variance Guarantees.” arXiv:1806.10234 [Cs, Stat], June.
———. 2018b. “Practical Bounds on the Error of Bayesian Posterior Approximations: A Nonasymptotic Approach.” arXiv:1809.09505 [Cs, Math, Stat], September.
Huggins, Jonathan H., Mikołaj Kasprzak, Trevor Campbell, and Tamara Broderick. 2019. “Practical Posterior Error Bounds from Variational Objectives.” arXiv:1910.04102 [Cs, Math, Stat], October.
Huggins, Jonathan, Ryan P Adams, and Tamara Broderick. 2017. “PASS-GLM: Polynomial Approximate Sufficient Statistics for Scalable Bayesian GLM Inference.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 3611–21. Curran Associates, Inc.
Khan, Gabriel, and Jun Zhang. 2022. “When Optimal Transport Meets Information Geometry.” Information Geometry, June.
Kim, Jin W., and Prashant G. Mehta. 2019. “An Optimal Control Derivation of Nonlinear Smoothing Equations,” April.
Léonard, Christian. 2014. “A Survey of the Schrödinger Problem and Some of Its Connections with Optimal Transport.” Discrete & Continuous Dynamical Systems - A 34 (4): 1533.
Liu, Huidong, Xianfeng Gu, and Dimitris Samaras. 2018. “A Two-Step Computation of the Exact GAN Wasserstein Distance.” In International Conference on Machine Learning, 3159–68.
Liu, Qiang, and Dilin Wang. 2019. “Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm.” In Advances in Neural Information Processing Systems.
Louizos, Christos, and Max Welling. 2017. “Multiplicative Normalizing Flows for Variational Bayesian Neural Networks.” In PMLR, 2218–27.
Mahdian, Saied, Jose Blanchet, and Peter Glynn. 2019. “Optimal Transport Relaxations with Application to Wasserstein GANs.” arXiv:1906.03317 [Cs, Math, Stat], June.
Mallasto, Anton, Augusto Gerolin, and Hà Quang Minh. 2021. “Entropy-Regularized 2-Wasserstein Distance Between Gaussian Measures.” Information Geometry, August.
Marzouk, Youssef, Tarek Moselhy, Matthew Parno, and Alessio Spantini. 2016. “Sampling via Measure Transport: An Introduction.” In Handbook of Uncertainty Quantification, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1–41. Cham: Springer International Publishing.
Maurya, Abhinav. 2018. “Optimal Transport in Statistical Machine Learning: Selected Review and Some Open Questions.”
Minh, Hà Quang. 2022. “Finite Sample Approximations of Exact and Entropic Wasserstein Distances Between Covariance Operators and Gaussian Processes.” SIAM/ASA Journal on Uncertainty Quantification, February, 96–124.
Mohajerin Esfahani, Peyman, and Daniel Kuhn. 2018. “Data-Driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations.” Mathematical Programming 171 (1): 115–66.
Montavon, Grégoire, Klaus-Robert Müller, and Marco Cuturi. 2016. “Wasserstein Training of Restricted Boltzmann Machines.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3711–19. Curran Associates, Inc.
Ostrovski, Georg, Will Dabney, and Remi Munos. n.d. “Autoregressive Quantile Networks for Generative Modeling.”
Panaretos, Victor M., and Yoav Zemel. 2019. “Statistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application 6 (1): 405–31.
Perrot, Michaël, Nicolas Courty, Rémi Flamary, and Amaury Habrard. n.d. “Mapping Estimation for Discrete Optimal Transport.”
Peyré, Gabriel, and Marco Cuturi. 2019. Computational Optimal Transport. Vol. 11.
Peyré, Gabriel, Marco Cuturi, and Justin Solomon. 2016. “Gromov-Wasserstein Averaging of Kernel and Distance Matrices.” In International Conference on Machine Learning, 2664–72. PMLR.
Redko, Ievgen, Nicolas Courty, Rémi Flamary, and Devis Tuia. 2019. “Optimal Transport for Multi-Source Domain Adaptation Under Target Shift.” In The 22nd International Conference on Artificial Intelligence and Statistics, 849–58. PMLR.
Rezende, Danilo Jimenez, and Shakir Mohamed. 2015. “Variational Inference with Normalizing Flows.” In International Conference on Machine Learning, 1530–38. ICML’15. Lille, France: JMLR.org.
Rigollet, Philippe, and Jonathan Weed. 2018. “Entropic Optimal Transport Is Maximum-Likelihood Deconvolution.” arXiv.
Rustamov, Raif M. 2019. “Closed-Form Expressions for Maximum Mean Discrepancy with Applications to Wasserstein Auto-Encoders.” arXiv:1901.03227 [Cs, Stat], January.
Santambrogio, Filippo. 2015. Optimal Transport for Applied Mathematicians. Progress in Nonlinear Differential Equations and Their Applications. Cham: Springer International Publishing.
Schmitzer, Bernhard. 2019. “Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems.” arXiv:1610.06519 [Cs, Math], February.
Solomon, Justin, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. 2015. “Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains.” ACM Transactions on Graphics 34 (4): 66:1–11.
Spantini, Alessio, Daniele Bigoni, and Youssef Marzouk. 2017. “Inference via Low-Dimensional Couplings.” Journal of Machine Learning Research 19 (66): 2639–709.
Taghvaei, Amirhossein, and Prashant G. Mehta. 2019. “An Optimal Transport Formulation of the Ensemble Kalman Filter,” October.
Verdinelli, Isabella, and Larry Wasserman. 2019. “Hybrid Wasserstein Distance and Fast Distribution Clustering.” Electronic Journal of Statistics 13 (2): 5088–5119.
Wang, Prince Zizhuang, and William Yang Wang. 2019. “Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 284–94. Minneapolis, Minnesota: Association for Computational Linguistics.
Zhang, Rui, Christian Walder, Edwin V. Bonilla, Marian-Andrei Rizoiu, and Lexing Xie. 2020. “Quantile Propagation for Wasserstein-Approximate Gaussian Processes.” In Proceedings of NeurIPS 2020.
Zhu, B., J. Jiao, and D. Tse. 2020. “Deconstructing Generative Adversarial Networks.” IEEE Transactions on Information Theory 66 (11): 7155–79.
