Inference without KL divergence

Placeholder. Various links on inference that minimises some divergence other than the Kullback–Leibler divergence.

As mentioned under likelihood-free inference, this is especially interesting in the case of Bayesian inference, or more generally distributional inference, where complications ensue.

(Chu, Blanchet, and Glynn 2019):

> in many fields, the object of interest is a probability distribution; moreover, the learning process is guided by a probability functional to be minimized, a loss function that conceptually maps a probability distribution to a real number […] Because the optimization now takes place in the infinite-dimensional space of probability measures, standard finite-dimensional algorithms like gradient descent are initially unavailable; even the proper notion for the derivative of these functionals is unclear. We call upon a body of literature known as von Mises calculus, originally developed in the field of asymptotic statistics, to make these functional derivatives precise. Remarkably, we find that once the connection is made, the resulting generalized descent algorithm, which we call probability functional descent, is intimately compatible with standard deep learning techniques such as stochastic gradient descent, the reparameterization trick, and adversarial training.
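To make the idea concrete, here is a minimal sketch of minimising a non-KL discrepancy over a parametric family of distributions: fit the mean and scale of a reparameterized Gaussian to data by gradient descent on a (biased) empirical squared maximum mean discrepancy. This is an illustration of the general recipe, not the probability functional descent algorithm itself; the kernel bandwidth, learning rate, and finite-difference gradients are all simplifying choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd2(x, y, bandwidth=1.0):
    """Biased empirical estimate of squared MMD with an RBF kernel."""
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bandwidth**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# "Data" from an unknown distribution; here secretly N(2, 0.5^2).
data = rng.normal(2.0, 0.5, size=500)

# Model: reparameterized Gaussian, x = mu + sigma * eps, with fixed base
# noise eps (common random numbers make the loss deterministic in the
# parameters, so plain gradient descent applies).
eps = rng.normal(size=500)
mu, log_sigma = 0.0, 0.0
lr, h = 0.5, 1e-4

def loss(mu, log_sigma):
    return mmd2(mu + np.exp(log_sigma) * eps, data)

for _ in range(200):
    # Crude central-difference gradients; autodiff would do this exactly.
    g_mu = (loss(mu + h, log_sigma) - loss(mu - h, log_sigma)) / (2 * h)
    g_ls = (loss(mu, log_sigma + h) - loss(mu, log_sigma - h)) / (2 * h)
    mu -= lr * g_mu
    log_sigma -= lr * g_ls

print(mu, np.exp(log_sigma))  # approaches roughly (2.0, 0.5)
```

The same skeleton works for other discrepancies (energy distance, Sinkhorn divergences, kernelized Stein discrepancies) by swapping out `mmd2`; the references below supply the theory for when such losses are well-posed and differentiable.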


Ambrogioni, Luca, Umut Güçlü, Yagmur Güçlütürk, Max Hinne, Eric Maris, and Marcel A. J. van Gerven. 2018. “Wasserstein Variational Inference.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2478–87. NIPS’18. USA: Curran Associates Inc.
Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. “Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.
Beran, Rudolf. 1977. “Minimum Hellinger Distance Estimates for Parametric Models.” The Annals of Statistics 5 (3): 445–63.
Bissiri, P. G., C. C. Holmes, and S. G. Walker. 2016. “A General Framework for Updating Belief Distributions.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78 (5): 1103–30.
Blanchet, Jose, Yang Kang, and Karthyek Murthy. 2016. “Robust Wasserstein Profile Inference and Applications to Machine Learning.” arXiv:1610.05627 [math, Stat], October.
Blanchet, Jose, Yang Kang, Fan Zhang, and Karthyek Murthy. 2017. “Data-Driven Optimal Cost Selection for Distributionally Robust Optimization.” arXiv:1705.07152 [stat], May.
Blanchet, Jose, Karthyek Murthy, and Fan Zhang. 2018. “Optimal Transport Based Distributionally Robust Optimization: Structural Properties and Iterative Schemes.” arXiv:1810.02403 [math], October.
Block, Per, Marion Hoffman, Isabel J. Raabe, Jennifer Beam Dowd, Charles Rahal, Ridhi Kashyap, and Melinda C. Mills. 2020. “Social Network-Based Distancing Strategies to Flatten the COVID-19 Curve in a Post-Lockdown World.” arXiv:2004.07052 [physics, q-Bio, Stat], April.
Campbell, Trevor, and Tamara Broderick. 2017. “Automated Scalable Bayesian Inference via Hilbert Coresets.” arXiv:1710.05053 [cs, Stat], October.
Chen, Xinshi, Hanjun Dai, and Le Song. 2019. “Meta Particle Flow for Sequential Bayesian Inference.” arXiv:1902.00640 [cs, Stat], February.
Chu, Casey, Jose Blanchet, and Peter Glynn. 2019. “Probability Functional Descent: A Unifying Perspective on GANs, Variational Inference, and Reinforcement Learning.” In ICML.
Fernholz, Luisa Turrin. 1983. von Mises calculus for statistical functionals. Lecture Notes in Statistics 19. New York: Springer.
———. 2014. “Statistical Functionals.” In Wiley StatsRef: Statistics Reference Online. John Wiley & Sons.
Frogner, Charlie, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. 2015. “Learning with a Wasserstein Loss.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2053–61. Curran Associates, Inc.
Gao, Rui, and Anton J. Kleywegt. 2016. “Distributionally Robust Stochastic Optimization with Wasserstein Distance.” arXiv:1604.02199 [math], April.
Gibbs, Alison L., and Francis Edward Su. 2002. “On Choosing and Bounding Probability Metrics.” International Statistical Review 70 (3): 419–35.
Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. “Improved Training of Wasserstein GANs.” arXiv:1704.00028 [cs, Stat], March.
Guo, Xin, Johnny Hong, Tianyi Lin, and Nan Yang. 2017. “Relaxed Wasserstein with Applications to GANs.” arXiv:1705.07164 [cs, Stat], May.
Liu, Huidong, Xianfeng Gu, and Dimitris Samaras. 2018. “A Two-Step Computation of the Exact GAN Wasserstein Distance.” In International Conference on Machine Learning, 3159–68.
Liu, Qiang, Jason D. Lee, and Michael I. Jordan. 2016. “A Kernelized Stein Discrepancy for Goodness-of-Fit Tests and Model Evaluation.” arXiv:1602.03253 [stat], July.
Mahdian, Saied, Jose Blanchet, and Peter Glynn. 2019. “Optimal Transport Relaxations with Application to Wasserstein GANs.” arXiv:1906.03317 [cs, Math, Stat], June.
Matsubara, Takuo, Jeremias Knoblauch, François-Xavier Briol, and Chris J. Oates. 2021. “Robust Generalised Bayesian Inference for Intractable Likelihoods,” April.
Moosmüller, Caroline, and Alexander Cloninger. 2021. “Linear Optimal Transport Embedding: Provable Wasserstein Classification for Certain Rigid Transformations and Perturbations.” arXiv:2008.09165 [cs, Math, Stat], May.
Ostrovski, Georg, Will Dabney, and Remi Munos. 2018. “Autoregressive Quantile Networks for Generative Modeling.” In International Conference on Machine Learning.
Panaretos, Victor M., and Yoav Zemel. 2019. “Statistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application 6 (1): 405–31.
Ranganath, Rajesh, Dustin Tran, Jaan Altosaar, and David Blei. 2016. “Operator Variational Inference.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 496–504. Curran Associates, Inc.
Rustamov, Raif M. 2019. “Closed-Form Expressions for Maximum Mean Discrepancy with Applications to Wasserstein Auto-Encoders.” arXiv:1901.03227 [cs, Stat], January.
Santambrogio, Filippo. 2015. Optimal Transport for Applied Mathematicians. Progress in Nonlinear Differential Equations and Their Applications. Cham: Springer International Publishing.
Solomon, Justin, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. 2015. “Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains.” ACM Transactions on Graphics 34 (4): 66:1–11.
Wang, Prince Zizhuang, and William Yang Wang. 2019. “Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 284–94. Minneapolis, Minnesota: Association for Computational Linguistics.
Zhang, Rui, Christian Walder, Edwin V. Bonilla, Marian-Andrei Rizoiu, and Lexing Xie. 2020. “Quantile Propagation for Wasserstein-Approximate Gaussian Processes.” In Advances in Neural Information Processing Systems 33.
