Inference without KL divergence



Placeholder. Various links on inference by minimising divergences other than the Kullback-Leibler divergence.

As mentioned under likelihood-free inference, this is especially interesting in the case of Bayesian inference or, more generally, distributional inference, where complications ensue.
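For orientation, the common template (my gloss, not any single reference's notation) is minimum-distance estimation: pick

$$\hat{\theta} = \operatorname*{arg\,min}_{\theta}\; D(\mathbb{P}_{\mathrm{data}}, \mathbb{P}_{\theta}),$$

where $D$ might be the Hellinger distance (Beran 1977), a Wasserstein distance (Arjovsky, Chintala, and Bottou 2017), or the maximum mean discrepancy (Dellaporta et al. 2022), rather than the KL divergence, whose minimisation recovers maximum likelihood.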

(Chu, Blanchet, and Glynn 2019):

in many fields, the object of interest is a probability distribution; moreover, the learning process is guided by a probability functional to be minimized, a loss function that conceptually maps a probability distribution to a real number […] Because the optimization now takes place in the infinite-dimensional space of probability measures, standard finite-dimensional algorithms like gradient descent are initially unavailable; even the proper notion for the derivative of these functionals is unclear. We call upon a body of literature known as von Mises calculus, originally developed in the field of asymptotic statistics, to make these functional derivatives precise. Remarkably, we find that once the connection is made, the resulting generalized descent algorithm, which we call probability functional descent, is intimately compatible with standard deep learning techniques such as stochastic gradient descent, the reparameterization trick, and adversarial training.
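To make β€œdescent in the space of probability measures” concrete, here is a minimal sketch (my illustration, not the paper's algorithm): stochastic gradient descent on one particular probability functional, the squared maximum mean discrepancy between a reparameterised Gaussian family and an observed sample. The Gaussian family, the RBF kernel, the bandwidth, and the step size are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def mmd2(xs, ys, bandwidth=1.0):
    # Biased (V-statistic) estimate of squared MMD under an RBF kernel.
    k = lambda a, b: jnp.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bandwidth**2))
    return k(xs, xs).mean() + k(ys, ys).mean() - 2 * k(xs, ys).mean()

def loss(params, eps, data):
    # Reparameterisation trick: a draw from q_theta = N(mu, sigma^2) is a
    # deterministic transform of parameter-free noise, so the functional
    # J(q_theta) = MMD^2(q_theta, data) becomes differentiable in theta.
    mu, log_sigma = params
    xs = mu + jnp.exp(log_sigma) * eps
    return mmd2(xs, data)

key = jax.random.PRNGKey(0)
key, sub = jax.random.split(key)
data = 3.0 + 0.5 * jax.random.normal(sub, (256,))  # toy "observed" sample
params = (jnp.array(0.0), jnp.array(0.0))          # (mu, log sigma)

grad_loss = jax.jit(jax.grad(loss))
for _ in range(500):
    key, sub = jax.random.split(key)
    eps = jax.random.normal(sub, (256,))
    grads = grad_loss(params, eps, data)
    # plain SGD on the parameters indexing the distribution
    params = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, params, grads)

print(params[0], jnp.exp(params[1]))  # should approach mu ~ 3.0, sigma ~ 0.5
```

Swapping `mmd2` for some other smooth probability functional changes nothing structurally; that plug-and-play quality is what the probability-functional-descent viewpoint buys.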

Generalized Bayesian computation

See generalized Bayesian computation

References

Ambrogioni, Luca, Umut GΓΌΓ§lΓΌ, Yagmur GΓΌΓ§lΓΌtΓΌrk, Max Hinne, Eric Maris, and Marcel A. J. van Gerven. 2018. β€œWasserstein Variational Inference.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2478–87. NIPS’18. USA: Curran Associates Inc.
Arjovsky, Martin, Soumith Chintala, and LΓ©on Bottou. 2017. β€œWasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.
Beran, Rudolf. 1977. β€œMinimum Hellinger Distance Estimates for Parametric Models.” The Annals of Statistics 5 (3): 445–63.
Bissiri, P. G., C. C. Holmes, and S. G. Walker. 2016. β€œA General Framework for Updating Belief Distributions.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78 (5): 1103–30.
Blanchet, Jose, Yang Kang, and Karthyek Murthy. 2016. β€œRobust Wasserstein Profile Inference and Applications to Machine Learning.” arXiv:1610.05627 [Math, Stat], October.
Blanchet, Jose, Yang Kang, Fan Zhang, and Karthyek Murthy. 2017. β€œData-Driven Optimal Cost Selection for Distributionally Robust Optimization.” arXiv:1705.07152 [Stat], May.
Blanchet, Jose, Karthyek Murthy, and Fan Zhang. 2018. β€œOptimal Transport Based Distributionally Robust Optimization: Structural Properties and Iterative Schemes.” arXiv:1810.02403 [Math], October.
Block, Per, Marion Hoffman, Isabel J. Raabe, Jennifer Beam Dowd, Charles Rahal, Ridhi Kashyap, and Melinda C. Mills. 2020. β€œSocial Network-Based Distancing Strategies to Flatten the COVID-19 Curve in a Post-Lockdown World.” arXiv:2004.07052 [Physics, q-Bio, Stat], April.
Campbell, Trevor, and Tamara Broderick. 2017. β€œAutomated Scalable Bayesian Inference via Hilbert Coresets.” arXiv:1710.05053 [Cs, Stat], October.
Chen, Xinshi, Hanjun Dai, and Le Song. 2019. β€œMeta Particle Flow for Sequential Bayesian Inference.” arXiv:1902.00640 [Cs, Stat], February.
Chu, Casey, Jose Blanchet, and Peter Glynn. 2019. β€œProbability Functional Descent: A Unifying Perspective on GANs, Variational Inference, and Reinforcement Learning.” In ICML.
Dellaporta, Charita, Jeremias Knoblauch, Theodoros Damoulas, and FranΓ§ois-Xavier Briol. 2022. β€œRobust Bayesian Inference for Simulator-Based Models via the MMD Posterior Bootstrap.” arXiv:2202.04744 [Cs, Stat], February.
Fernholz, Luisa Turrin. 1983. Von Mises Calculus for Statistical Functionals. Lecture Notes in Statistics 19. New York: Springer.
β€”β€”β€”. 2014. β€œStatistical Functionals.” In Wiley StatsRef: Statistics Reference Online. Wiley.
Fong, Edwin, Simon Lyddon, and Chris Holmes. 2019. β€œScalable Nonparametric Sampling from Multimodal Posteriors with the Posterior Bootstrap.” arXiv:1902.03175 [Cs, Stat], August.
Frogner, Charlie, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. 2015. β€œLearning with a Wasserstein Loss.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2053–61. Curran Associates, Inc.
Gao, Rui, and Anton J. Kleywegt. 2022. β€œDistributionally Robust Stochastic Optimization with Wasserstein Distance.” arXiv.
Gibbs, Alison L., and Francis Edward Su. 2002. β€œOn Choosing and Bounding Probability Metrics.” International Statistical Review 70 (3): 419–35.
Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. β€œImproved Training of Wasserstein GANs.” arXiv:1704.00028 [Cs, Stat], March.
Guo, Xin, Johnny Hong, Tianyi Lin, and Nan Yang. 2017. β€œRelaxed Wasserstein with Applications to GANs.” arXiv:1705.07164 [Cs, Stat], May.
Liu, Huidong, Xianfeng Gu, and Dimitris Samaras. 2018. β€œA Two-Step Computation of the Exact GAN Wasserstein Distance.” In International Conference on Machine Learning, 3159–68.
Liu, Qiang, Jason D. Lee, and Michael Jordan. 2016. β€œA Kernelized Stein Discrepancy for Goodness-of-Fit Tests.” In Proceedings of the 33rd International Conference on Machine Learning.
Lyddon, Simon, Stephen Walker, and Chris Holmes. 2018. β€œNonparametric Learning from Bayesian Models with Randomized Objective Functions.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2075–85. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Mahdian, Saied, Jose Blanchet, and Peter Glynn. 2019. β€œOptimal Transport Relaxations with Application to Wasserstein GANs.” arXiv:1906.03317 [Cs, Math, Stat], June.
Matsubara, Takuo, Jeremias Knoblauch, FranΓ§ois-Xavier Briol, and Chris J. Oates. 2021. β€œRobust Generalised Bayesian Inference for Intractable Likelihoods.” arXiv:2104.07359 [Math, Stat], April.
MoosmΓΌller, Caroline, and Alexander Cloninger. 2021. β€œLinear Optimal Transport Embedding: Provable Wasserstein Classification for Certain Rigid Transformations and Perturbations.” arXiv:2008.09165 [Cs, Math, Stat], May.
Ostrovski, Georg, Will Dabney, and RΓ©mi Munos. 2018. β€œAutoregressive Quantile Networks for Generative Modeling.” In International Conference on Machine Learning.
Pacchiardi, Lorenzo, and Ritabrata Dutta. 2022. β€œGeneralized Bayesian Likelihood-Free Inference Using Scoring Rules Estimators.” arXiv:2104.03889 [Stat], March.
Panaretos, Victor M., and Yoav Zemel. 2019. β€œStatistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application 6 (1): 405–31.
Ranganath, Rajesh, Dustin Tran, Jaan Altosaar, and David Blei. 2016. β€œOperator Variational Inference.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 496–504. Curran Associates, Inc.
Rustamov, Raif M. 2021. β€œClosed-Form Expressions for Maximum Mean Discrepancy with Applications to Wasserstein Auto-Encoders.” Stat 10 (1): e329.
Santambrogio, Filippo. 2015. Optimal Transport for Applied Mathematicians. Progress in Nonlinear Differential Equations and Their Applications. Cham: Springer International Publishing.
Schmon, Sebastian M., Patrick W. Cannon, and Jeremias Knoblauch. 2021. β€œGeneralized Posteriors in Approximate Bayesian Computation.” arXiv:2011.08644 [Stat], February.
Solomon, Justin, Fernando de Goes, Gabriel PeyrΓ©, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. 2015. β€œConvolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains.” ACM Transactions on Graphics 34 (4): 66:1–11.
Wang, Prince Zizhuang, and William Yang Wang. 2019. β€œRiemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 284–94. Minneapolis, Minnesota: Association for Computational Linguistics.
Zhang, Rui, Christian Walder, Edwin V. Bonilla, Marian-Andrei Rizoiu, and Lexing Xie. 2020. β€œQuantile Propagation for Wasserstein-Approximate Gaussian Processes.” In Proceedings of NeurIPS 2020.
