Optimal transport inference

I feel the earth mover under my feet, I feel the ψ tumbling down, I feel my heart start to trembling, whenever you’re around (my barycentre)



Doing inference where the probability metric measuring the discrepancy between some target distribution and the implied inferential distribution is an optimal-transport one. Frequently intractable, but neat when we can get it. Sometimes we get there by estimating (the gradients of) an actual OT loss, or even the transport maps that imply that loss.

Placeholder/grab bag.

TODO: should we break this into discrete-state and continuous-state cases? Machinery looks different.
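For concreteness, a toy sketch of the flavour of thing I mean (not any particular method from the references below): a minimum-Wasserstein point estimate of a location parameter, using the fact that in one dimension the squared 2-Wasserstein distance between equal-sized empirical measures is the mean squared difference of their sorted samples. The model, data and optimiser here are all stand-ins.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
data = rng.normal(loc=1.7, size=2_000)   # observations from the unknown target
noise = rng.normal(size=2_000)           # fixed base samples for the model

def w2_squared(theta):
    # In 1D, W2^2 between equal-sized empirical measures is the mean
    # squared difference of the sorted samples.
    model = np.sort(theta + noise)
    return np.mean((np.sort(data) - model) ** 2)

theta_hat = minimize_scalar(w2_squared, bounds=(-5, 5), method="bounded").x
print(theta_hat)  # recovers something close to the true location 1.7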

NNs

Wasserstein GANs (Arjovsky, Chintala, and Bottou 2017) and OT GANs (Salimans et al. 2018) are argued to perform approximate optimal-transport inference, indirectly.

Surprise connection to matrix factorisation

Non-negative matrix factorisation via OT is a thing, e.g. in topic modeling (Huynh, Zhao, and Phung 2020; Zhao et al. 2020).

Via Fisher distance

See e.g. (J. H. Huggins et al. 2018b, 2018a) for a particular Bayes posterior approximation using OT and Fisher distance.

Minibatched

Daniel Daza in Approximating Wasserstein distances with PyTorch touches upon Fatras et al. (2020):

Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning. Yet their algorithmic complexity prevents their direct use on large scale datasets. To overcome this challenge, practitioners compute these distances on minibatches i.e., they average the outcome of several smaller optimal transport problems. We propose in this paper an analysis of this practice, which effects are not well understood so far. We notably argue that it is equivalent to an implicit regularization of the original problem, with appealing properties such as unbiased estimators, gradients and a concentration bound around the expectation, but also with defects such as loss of distance property.
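A minimal sketch of that practice, using the POT library (see Tools below); the function names ot.unif, ot.dist and ot.emd2 are POT's, everything else is made up for illustration. The estimator averages exact OT costs over random minibatch pairs; as the abstract notes, this loses the distance property (it is strictly positive even when both sets of samples come from the same distribution).

import numpy as np
import ot  # POT

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 2))         # samples from the first measure
Y = rng.normal(size=(10_000, 2)) + 1.0   # samples from the second measure

def minibatch_ot(X, Y, batch_size=64, n_batches=100):
    a = b = ot.unif(batch_size)          # uniform weights on each minibatch
    costs = []
    for _ in range(n_batches):
        xb = X[rng.choice(len(X), batch_size, replace=False)]
        yb = Y[rng.choice(len(Y), batch_size, replace=False)]
        M = ot.dist(xb, yb)              # pairwise squared Euclidean costs
        costs.append(ot.emd2(a, b, M))   # exact OT cost on this minibatch pair
    return np.mean(costs)

print(minibatch_ot(X, Y))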

Linearized embedding

Noted in Bai et al. (2023) via Cheng-Soon Ong:

Comparing K (probability) measures requires the pairwise calculation of transport-based distances, which, despite the significant recent computational speed-ups, remains to be relatively expensive. To address this problem, W. Wang et al. (2013) proposed the Linear Optimal Transport (LOT) framework, which linearizes the 2-Wasserstein distance utilizing its weak Riemannian structure. In short, the probability measures are embedded into the tangent space at a fixed reference measure (e.g., the measures’ Wasserstein barycenter) through a logarithmic map. The Euclidean distances between the embedded measures then approximate the 2-Wasserstein distance between the probability measures. The LOT framework is computationally attractive as it only requires the computation of one optimal transport problem per input measure, reducing the otherwise quadratic cost to linear. Moreover, the framework provides theoretical guarantees on convexifying certain sets of probability measures […], which is critical in supervised and unsupervised learning from sets of probability measures.
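A toy 1D illustration of that idea (not the multivariate algorithm of W. Wang et al. (2013)): in one dimension the Monge map from a reference measure is a composition of quantile functions, so embedding each measure by its quantile function on a fixed grid of levels makes Euclidean distances between embeddings approximate (here, essentially equal) the pairwise 2-Wasserstein distances, at the cost of one embedding per measure rather than one transport problem per pair.

import numpy as np

rng = np.random.default_rng(0)
samples = [rng.normal(loc=mu, size=5_000) for mu in (0.0, 0.5, 2.0)]  # K = 3 measures

levels = np.linspace(0.005, 0.995, 200)  # quantile levels of the reference
embeddings = np.stack([np.quantile(s, levels) for s in samples])  # one embedding per measure

# Pairwise distances now need K embeddings plus cheap Euclidean arithmetic,
# instead of K * (K - 1) / 2 transport problems.
diffs = embeddings[:, None, :] - embeddings[None, :, :]
W2_approx = np.sqrt(np.mean(diffs ** 2, axis=-1))
print(np.round(W2_approx, 2))  # roughly |mu_i - mu_j| for equal-variance Gaussians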

Tools

OTT

Optimal Transport Tools (OTT) (Cuturi et al. 2022), a toolbox for all things Wasserstein (documentation):

The goal of OTT is to provide sturdy, versatile and efficient optimal transport solvers, taking advantage of JAX features, such as JIT, auto-vectorization and implicit differentiation.

A typical OT problem has two ingredients: a pair of weight vectors a and b (one for each measure), with a ground cost matrix that is either directly given, or derived as the pairwise evaluation of a cost function on pairs of points taken from two measures. The main design choice in OTT comes from encapsulating the cost in a Geometry object, and [bundling] it with a few useful operations (notably kernel applications). The most common geometry is that of two clouds of vectors compared with the squared Euclidean distance, as illustrated in the example below:

import jax
import jax.numpy as jnp
from ott.tools import transport
# Samples two point clouds and their weights.
rngs = jax.random.split(jax.random.PRNGKey(0), 4)
n, m, d = 12, 14, 2
x = jax.random.normal(rngs[0], (n, d)) + 1
y = jax.random.uniform(rngs[1], (m, d))
a = jax.random.uniform(rngs[2], (n,))
b = jax.random.uniform(rngs[3], (m,))
a, b = a / jnp.sum(a), b / jnp.sum(b)
# Computes the couplings via Sinkhorn algorithm.
ot = transport.solve(x, y, a=a, b=b)
P = ot.matrix

The call to sinkhorn above works out the optimal transport solution and stores its output. The transport matrix can be instantiated from those optimal solutions and the Geometry again. That transport matrix links each point from the first point cloud to one or more points from the second, as illustrated below.

To be more precise, the Sinkhorn algorithm operates on the Geometry, taking into account the weights a and b, to solve the OT problem and produce a named tuple that contains two optimal dual potentials f and g (vectors of the same size as a and b), the objective reg_ot_cost, a log of the errors of the algorithm as it converges, and a converged flag.

The obtained coupling, from Cuturi et al. (2022).
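Following that description, one might inspect the returned object roughly as below; the attribute names (reg_ot_cost, converged, f, g) are assumed from the quoted passage and may differ between OTT versions, so treat this as a sketch rather than the library's guaranteed API.

# Continuing from the example above; attribute names are assumed from the
# description of the Sinkhorn output and may vary across OTT releases.
print(ot.reg_ot_cost)  # regularized OT objective
print(ot.converged)    # whether Sinkhorn reached its tolerance
f, g = ot.f, ot.g      # optimal dual potentials, shapes (n,) and (m,)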

POT

POT: Python Optimal Transport (Rémi Flamary et al. 2021)

This open source Python library provides several solvers for optimization problems related to Optimal Transport for signal, image processing and machine learning.

Website and documentation: https://PythonOT.github.io/

Source Code (MIT): https://github.com/PythonOT/POT

POT provides a wide range of generic OT solvers as well as machine-learning-related solvers; worked examples of both are collected in the documentation.
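A minimal usage sketch (assuming pip install pot; only ot.unif, ot.dist, ot.emd, ot.emd2 and ot.sinkhorn are POT calls, the data is made up): exact EMD and entropy-regularized Sinkhorn plans between two small point clouds.

import numpy as np
import ot

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 2))
y = rng.normal(size=(40, 2)) + 1.0
a, b = ot.unif(30), ot.unif(40)           # uniform weights on each cloud

M = ot.dist(x, y)                         # squared Euclidean cost matrix
M /= M.max()                              # rescale costs for a stable Sinkhorn run

P_exact = ot.emd(a, b, M)                 # exact OT plan (linear program)
cost_exact = ot.emd2(a, b, M)             # exact OT cost
P_sink = ot.sinkhorn(a, b, M, reg=1e-1)   # entropy-regularized plan
print(cost_exact, np.sum(P_sink * M))     # regularized cost is close, but the plan is blurrier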

GeomLoss

The GeomLoss library provides efficient GPU implementations for kernel norms (MMD losses), Hausdorff divergences, and debiased Sinkhorn divergences.

It is hosted on GitHub and distributed under the permissive MIT license.

GeomLoss functions are available through the custom PyTorch layers SamplesLoss, ImagesLoss and VolumesLoss which allow you to work with weighted point clouds (of any dimension), density maps and volumetric segmentation masks.
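A minimal SamplesLoss sketch (assuming pip install geomloss plus PyTorch; the data is arbitrary): a debiased Sinkhorn divergence between two point clouds, differentiable with respect to the sample positions, so it can be dropped into a training loop as an OT-flavoured loss.

import torch
from geomloss import SamplesLoss

x = torch.randn(500, 3, requires_grad=True)  # first point cloud
y = torch.randn(600, 3) + 1.0                # second point cloud

loss_fn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)  # interpolates towards OT as blur -> 0
loss = loss_fn(x, y)
loss.backward()                              # gradients w.r.t. the positions in x
print(loss.item(), x.grad.shape)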

Incoming

References

Agueh, Martial, and Guillaume Carlier. 2011. Barycenters in the Wasserstein Space.” SIAM Journal on Mathematical Analysis 43 (2): 904–24.
Alaya, Mokhtar Z., Maxime Berar, Gilles Gasso, and Alain Rakotomamonjy. 2019. Screening Sinkhorn Algorithm for Regularized Optimal Transport.” Advances in Neural Information Processing Systems 32.
Altschuler, Jason, Jonathan Niles-Weed, and Philippe Rigollet. 2017. “Near-Linear Time Approximation Algorithms for Optimal Transport via Sinkhorn Iteration.” In Advances in Neural Information Processing Systems 30.
Ambrogioni, Luca, Umut Güçlü, Yagmur Güçlütürk, Max Hinne, Eric Maris, and Marcel A. J. van Gerven. 2018. Wasserstein Variational Inference.” In Proceedings of the 32Nd International Conference on Neural Information Processing Systems, 2478–87. NIPS’18. USA: Curran Associates Inc.
Ambrosio, Luigi, Nicola Gigli, and Giuseppe Savare. 2008. Gradient Flows: In Metric Spaces and in the Space of Probability Measures. 2nd ed. Lectures in Mathematics. ETH Zürich. Birkhäuser Basel.
Angenent, Sigurd, Steven Haker, and Allen Tannenbaum. 2003. Minimizing Flows for the Monge-Kantorovich Problem.” SIAM Journal on Mathematical Analysis 35 (1): 61–97.
Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.
Arora, Sanjeev, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. 2017. Generalization and Equilibrium in Generative Adversarial Nets (GANs).” arXiv:1703.00573 [Cs], March.
Bachoc, Francois, Alexandra Suvorikova, David Ginsbourger, Jean-Michel Loubes, and Vladimir Spokoiny. 2019. Gaussian Processes with Multidimensional Distribution Inputs via Optimal Transport and Hilbertian Embedding.” arXiv:1805.00753 [Stat], April.
Bai, Yikun, Ivan Medri, Rocio Diaz Martin, Rana Muhammad Shahroz Khan, and Soheil Kolouri. 2023. Linear Optimal Partial Transport Embedding.”
Benamou, Jean-David. 2021. Optimal Transportation, Modelling and Numerical Simulation.” Acta Numerica 30 (May): 249–325.
Benamou, Jean-David, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyré. 2014. Iterative Bregman Projections for Regularized Transportation Problems.” arXiv:1412.5154 [Math], December.
Berg, Rianne van den, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. 2018. Sylvester Normalizing Flows for Variational Inference.” In UAI18.
Bishop, Adrian N., and Arnaud Doucet. 2014. Distributed Nonlinear Consensus in the Space of Probability Measures.” IFAC Proceedings Volumes, 19th IFAC World Congress, 47 (3): 8662–68.
Blanchet, Jose, Lin Chen, and Xun Yu Zhou. 2018. Distributionally Robust Mean-Variance Portfolio Selection with Wasserstein Distances.” arXiv:1802.04885 [Stat], February.
Blanchet, Jose, Arun Jambulapati, Carson Kent, and Aaron Sidford. 2018. Towards Optimal Running Times for Optimal Transport.” arXiv:1810.07717 [Cs], October.
Blanchet, Jose, Yang Kang, and Karthyek Murthy. 2016. Robust Wasserstein Profile Inference and Applications to Machine Learning.” arXiv:1610.05627 [Math, Stat], October.
Blanchet, Jose, Karthyek Murthy, and Nian Si. 2019. Confidence Regions in Wasserstein Distributionally Robust Estimation.” arXiv:1906.01614 [Math, Stat], June.
Blondel, Mathieu, Vivien Seguy, and Antoine Rolet. 2018. Smooth and Sparse Optimal Transport.” In AISTATS 2018.
Boissard, Emmanuel. 2011. Simple Bounds for the Convergence of Empirical and Occupation Measures in 1-Wasserstein Distance.” Electronic Journal of Probability 16 (none).
Bonneel, Nicolas. n.d. “Displacement Interpolation Using Lagrangian Mass Transport,” 11.
Bonnotte, Nicolas. 2012. From Knothe’s Rearrangement to Brenier’s Optimal Transport Map.” arXiv.
Canas, Guillermo D., and Lorenzo Rosasco. 2012. Learning Probability Measures with Respect to Optimal Transport Metrics.” arXiv:1209.1077 [Cs, Stat], September.
Carlier, Guillaume, Marco Cuturi, Brendan Pass, and Carola Schoenlieb. 2017. “Optimal Transport Meets Probability, Statistics and Machine Learning,” 9.
Carlier, Guillaume, Alfred Galichon, and Filippo Santambrogio. 2008. From Knothe’s Transport to Brenier’s Map and a Continuation Method for Optimal Transport.” arXiv.
Cazelles, Elsa, Arnaud Robert, and Felipe Tobar. 2019. The Wasserstein-Fourier Distance for Stationary Time Series.”
Chizat, Lenaic, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. 2017. Scaling Algorithms for Unbalanced Transport Problems.” arXiv:1607.05816 [Math], May.
Chu, Casey, Jose Blanchet, and Peter Glynn. 2019. Probability Functional Descent: A Unifying Perspective on GANs, Variational Inference, and Reinforcement Learning.” In ICML.
Corenflos, Adrien, James Thornton, George Deligiannidis, and Arnaud Doucet. 2021. Differentiable Particle Filtering via Entropy-Regularized Optimal Transport.” arXiv:2102.07850 [Cs, Stat], June.
Coscia, Michele. 2020. “Generalized Euclidean Measure to Estimate Network Distances,” 11.
Courty, Nicolas, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. 2016. Optimal Transport for Domain Adaptation.” arXiv:1507.00504 [Cs], June.
Cuturi, Marco. 2013. Sinkhorn Distances: Lightspeed Computation of Optimal Transportation Distances.” In Advances in Neural Information Processing Systems 26.
Cuturi, Marco, and Arnaud Doucet. 2014. Fast Computation of Wasserstein Barycenters.” In International Conference on Machine Learning, 685–93. PMLR.
Cuturi, Marco, Laetitia Meng-Papaxanthos, Yingtao Tian, Charlotte Bunne, Geoff Davis, and Olivier Teboul. 2022. Optimal Transport Tools (OTT): A JAX Toolbox for All Things Wasserstein.” arXiv.
Fatras, Kilian, Younes Zine, Rémi Flamary, Remi Gribonval, and Nicolas Courty. 2020. Learning with Minibatch Wasserstein : Asymptotic and Gradient Properties.” In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 2131–41. PMLR.
Fernholz, Luisa Turrin. 1983. von Mises calculus for statistical functionals. Lecture Notes in Statistics 19. New York: Springer.
Feydy, Jean, Thibault Séjourné, François-Xavier Vialard, Shun-ichi Amari, Alain Trouve, and Gabriel Peyré. 2019. Interpolating Between Optimal Transport and MMD Using Sinkhorn Divergences.” In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, 2681–90. PMLR.
Flamary, Rémi, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, et al. 2021. POT: Python Optimal Transport.” Journal of Machine Learning Research 22 (78): 1–8.
Flamary, Rémi, Marco Cuturi, Nicolas Courty, and Alain Rakotomamonjy. 2018. Wasserstein Discriminant Analysis.” Machine Learning 107 (12): 1923–45.
Flamary, Remi, Alain Rakotomamonjy, Nicolas Courty, and Devis Tuia. n.d. “Optimal Transport with Laplacian Regularization,” 10.
Frogner, Charlie, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. 2015. Learning with a Wasserstein Loss.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2053–61. Curran Associates, Inc.
Gao, Rui, and Anton J. Kleywegt. 2022. Distributionally Robust Stochastic Optimization with Wasserstein Distance.” arXiv.
Garbuno-Inigo, Alfredo, Franca Hoffmann, Wuchen Li, and Andrew M. Stuart. 2020. Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems 19 (1): 412–41.
Genevay, Aude, Marco Cuturi, Gabriel Peyré, and Francis Bach. 2016. Stochastic Optimization for Large-Scale Optimal Transport.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3432–40. Curran Associates, Inc.
Genevay, Aude, Gabriel Peyré, and Marco Cuturi. 2017. Learning Generative Models with Sinkhorn Divergences.” arXiv:1706.00292 [Stat], October.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc.
Gozlan, Nathael, and Christian Léonard. 2010. Transport Inequalities. A Survey.” arXiv:1003.3852 [Math], March.
Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. Improved Training of Wasserstein GANs.” arXiv:1704.00028 [Cs, Stat], March.
Guo, Xin, Johnny Hong, Tianyi Lin, and Nan Yang. 2017. Relaxed Wasserstein with Applications to GANs.” arXiv:1705.07164 [Cs, Stat], May.
Huggins, Jonathan H., Trevor Campbell, Mikołaj Kasprzak, and Tamara Broderick. 2018a. Scalable Gaussian Process Inference with Finite-Data Mean and Variance Guarantees.” arXiv:1806.10234 [Cs, Stat], June.
———. 2018b. Practical Bounds on the Error of Bayesian Posterior Approximations: A Nonasymptotic Approach.” arXiv:1809.09505 [Cs, Math, Stat], September.
Huggins, Jonathan H., Mikołaj Kasprzak, Trevor Campbell, and Tamara Broderick. 2019. Practical Posterior Error Bounds from Variational Objectives.” arXiv:1910.04102 [Cs, Math, Stat], October.
Huggins, Jonathan, Ryan P Adams, and Tamara Broderick. 2017. PASS-GLM: Polynomial Approximate Sufficient Statistics for Scalable Bayesian GLM Inference.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 3611–21. Curran Associates, Inc.
Huynh, Viet, He Zhao, and Dinh Phung. 2020. OTLDA: A Geometry-Aware Optimal Transport Approach for Topic Modeling.” In Advances in Neural Information Processing Systems, 33:18573–82. Curran Associates, Inc.
Khan, Gabriel, and Jun Zhang. 2022. When Optimal Transport Meets Information Geometry.” Information Geometry, June.
Kim, Jin W., and Prashant G. Mehta. 2019. An Optimal Control Derivation of Nonlinear Smoothing Equations,” April.
Léonard, Christian. 2014. A Survey of the Schrödinger Problem and Some of Its Connections with Optimal Transport.” Discrete & Continuous Dynamical Systems - A 34 (4): 1533.
Liu, Huidong, Xianfeng Gu, and Dimitris Samaras. 2018. A Two-Step Computation of the Exact GAN Wasserstein Distance.” In International Conference on Machine Learning, 3159–68.
Liu, Qiang, and Dilin Wang. 2019. Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm.” In Advances In Neural Information Processing Systems.
Louizos, Christos, and Max Welling. 2017. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks.” In PMLR, 2218–27.
Magyar, Jared C., and Malcolm S. Sambridge. 2022. The Wasserstein Distance as a Hydrological Objective Function.” Preprint. Catchment hydrology/Mathematical applications.
Mahdian, Saied, Jose Blanchet, and Peter Glynn. 2019. Optimal Transport Relaxations with Application to Wasserstein GANs.” arXiv:1906.03317 [Cs, Math, Stat], June.
Mallasto, Anton, Augusto Gerolin, and Hà Quang Minh. 2021. Entropy-Regularized 2-Wasserstein Distance Between Gaussian Measures.” Information Geometry, August.
Marzouk, Youssef, Tarek Moselhy, Matthew Parno, and Alessio Spantini. 2016. Sampling via Measure Transport: An Introduction.” In Handbook of Uncertainty Quantification, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1:1–41. Cham: Springer Heidelberg.
Maurya, Abhinav. 2018. “Optimal Transport in Statistical Machine Learning : Selected Review and Some Open Questions.” In.
Minh, Hà Quang. 2022. Finite Sample Approximations of Exact and Entropic Wasserstein Distances Between Covariance Operators and Gaussian Processes.” SIAM/ASA Journal on Uncertainty Quantification, February, 96–124.
Mohajerin Esfahani, Peyman, and Daniel Kuhn. 2018. Data-Driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations.” Mathematical Programming 171 (1): 115–66.
Montavon, Grégoire, Klaus-Robert Müller, and Marco Cuturi. 2016. Wasserstein Training of Restricted Boltzmann Machines.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3711–19. Curran Associates, Inc.
Ostrovski, Georg, Will Dabney, and Remi Munos. n.d. “Autoregressive Quantile Networks for Generative Modeling,” 10.
Panaretos, Victor M., and Yoav Zemel. 2019. Statistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application 6 (1): 405–31.
Perrot, Michaël, Nicolas Courty, Rémi Flamary, and Amaury Habrard. n.d. “Mapping Estimation for Discrete Optimal Transport,” 9.
Peyré, Gabriel, and Marco Cuturi. 2019. Computational Optimal Transport. Vol. 11.
Peyré, Gabriel, Marco Cuturi, and Justin Solomon. 2016. Gromov-Wasserstein Averaging of Kernel and Distance Matrices.” In International Conference on Machine Learning, 2664–72. PMLR.
Qian, Wei, Bin Hong, Deng Cai, Xiaofei He, and Xuelong Li. 2016. Non-Negative Matrix Factorization with Sinkhorn Distance.” In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 1960–66. IJCAI’16. New York, New York, USA: AAAI Press.
Redko, Ievgen, Nicolas Courty, Rémi Flamary, and Devis Tuia. 2019. Optimal Transport for Multi-Source Domain Adaptation Under Target Shift.” In The 22nd International Conference on Artificial Intelligence and Statistics, 849–58. PMLR.
Rezende, Danilo Jimenez, and Shakir Mohamed. 2015. Variational Inference with Normalizing Flows.” In International Conference on Machine Learning, 1530–38. ICML’15. Lille, France: JMLR.org.
Rigollet, Philippe, and Jonathan Weed. 2018. Entropic Optimal Transport Is Maximum-Likelihood Deconvolution.” arXiv.
Rustamov, Raif M. 2021. Closed-Form Expressions for Maximum Mean Discrepancy with Applications to Wasserstein Auto-Encoders.” Stat 10 (1): e329.
Salimans, Tim, Han Zhang, Alec Radford, and Dimitris Metaxas. 2018. Improving GANs Using Optimal Transport.” arXiv.
Sambridge, Malcolm, Andrew Jackson, and Andrew P Valentine. 2022. Geophysical Inversion and Optimal Transport.” Geophysical Journal International 231 (1): 172–98.
Santambrogio, Filippo. 2015. Optimal Transport for Applied Mathematicians. Edited by Filippo Santambrogio. Progress in Nonlinear Differential Equations and Their Applications. Cham: Springer International Publishing.
Scetbon, Meyer, Marco Cuturi, and Gabriel Peyré. 2021. Low-Rank Sinkhorn Factorization.” In Proceedings of the 38th International Conference on Machine Learning, 9344–54. PMLR.
Schmitz, Morgan A., Matthieu Heitz, Nicolas Bonneel, Fred Ngolè, David Coeurjolly, Marco Cuturi, Gabriel Peyré, and Jean-Luc Starck. 2018. Wasserstein Dictionary Learning: Optimal Transport-Based Unsupervised Nonlinear Dictionary Learning.” SIAM Journal on Imaging Sciences 11 (1): 643–78.
Schmitzer, Bernhard. 2019. Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems.” arXiv:1610.06519 [Cs, Math], February.
Solomon, Justin, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. 2015. Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains.” ACM Transactions on Graphics 34 (4): 66:1–11.
Spantini, Alessio, Daniele Bigoni, and Youssef Marzouk. 2017. Inference via Low-Dimensional Couplings.” Journal of Machine Learning Research 19 (66): 2639–709.
Taghvaei, Amirhossein, and Prashant G. Mehta. 2021. An Optimal Transport Formulation of the Ensemble Kalman Filter.” IEEE Transactions on Automatic Control 66 (7): 3052–67.
Verdinelli, Isabella, and Larry Wasserman. 2019. Hybrid Wasserstein Distance and Fast Distribution Clustering.” Electronic Journal of Statistics 13 (2): 5088–5119.
Viehmann, Thomas. 2019. Implementation of Batched Sinkhorn Iterations for Entropy-Regularized Wasserstein Loss.” arXiv.
Wang, Prince Zizhuang, and William Yang Wang. 2019. Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 284–94. Minneapolis, Minnesota: Association for Computational Linguistics.
Wang, Wei, Dejan Slepčev, Saurav Basu, John A. Ozolek, and Gustavo K. Rohde. 2013. A Linear Optimal Transportation Framework for Quantifying and Visualizing Variations in Sets of Images.” International Journal of Computer Vision 101 (2): 254–69.
Zhang, Rui, Christian Walder, Edwin V. Bonilla, Marian-Andrei Rizoiu, and Lexing Xie. 2020. Quantile Propagation for Wasserstein-Approximate Gaussian Processes.” In Proceedings of NeurIPS 2020.
Zhang, Stephen Y. 2021. A Unified Framework for Non-Negative Matrix and Tensor Factorisations with a Smoothed Wasserstein Loss.” arXiv.
Zhao, He, Dinh Phung, Viet Huynh, Trung Le, and Wray Buntine. 2020. Neural Topic Model via Optimal Transport.” In.
Zhu, B., J. Jiao, and D. Tse. 2020. Deconstructing Generative Adversarial Networks.” IEEE Transactions on Information Theory 66 (11): 7155–79.
