Variational inference

On fitting the best model one can be bothered to fit

Inference where we approximate the density of the posterior variationally. That is, we use cunning tricks to turn an inference problem into an optimisation problem over some parameter set, usually one that allows us to trade off difficulty for fidelity in some useful way.

This idea is not intrinsically Bayesian (i.e. the density we are approximating need not be a posterior density or the marginal likelihood of the evidence), but much of the hot literature on it is from Bayesians doing probabilistic deep learning, so for concreteness I will assume Bayesian uses here.

This is usually mentioned in contrast with the other main method of approximating such densities: sampling from them, usually by Markov Chain Monte Carlo. In practice the two are related (Salimans, Kingma, and Welling 2015) and nowadays frequently used together (Rezende and Mohamed 2015; Caterini, Doucet, and Sejdinovic 2018).

See also mixture models, probabilistic deep learning, directed graphical models, reparameterization tricks.

Introduction

The classic intro seems to be (Jordan et al. 1999), which covers a diverse range of applications of variational calculus to inference. Typical ML uses these days are more specific; an archetypal example would be the variational auto-encoder (Kingma and Welling 2014).

Inference via KL divergence

Practically we often want a variational lower bound on the marginal (log-)likelihood \(\log p_{\theta}(\mathbf{x})\) for some probabilistic model with observations \(\mathbf{x},\) unobserved latent factors \(\mathbf{z},\) and model parameters \(\theta.\)

\[\begin{aligned} \log p_{\theta}(\mathbf{x}) &=\log \int p_{\theta}(\mathbf{x} | \mathbf{z}) p(\mathbf{z}) d \mathbf{z} \\ &=\log \int \frac{q_{\phi}(\mathbf{z} | \mathbf{x})}{q_{\phi}(\mathbf{z} | \mathbf{x})} p_{\theta}(\mathbf{x} | \mathbf{z}) p(\mathbf{z}) d \mathbf{z} \\ &\geq \mathbb{E}_{q_{\phi}(\mathbf{z} | \mathbf{x})}\left[\log \frac{p_{\theta}(\mathbf{x} | \mathbf{z}) p(\mathbf{z})}{q_{\phi}(\mathbf{z} | \mathbf{x})}\right] \quad \text{(Jensen's inequality)} \\ &=-\mathbb{D}_{KL}\left[q_{\phi}(\mathbf{z} | \mathbf{x}) \| p(\mathbf{z})\right]+\mathbb{E}_{q}\left[\log p_{\theta}(\mathbf{x} | \mathbf{z})\right]\\ &=-\mathcal{F}(\mathbf{x}) \end{aligned}\]

Here \(\mathcal{F}\) is called the (variational) free energy; its negative is the evidence lower bound (ELBO), which we maximise with respect to the variational parameters \(\phi\) (and usually also the model parameters \(\theta\)).
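To make the bound concrete, here is a minimal sketch of maximising it by stochastic gradient ascent, using a Gaussian \(q_{\phi}(\mathbf{z} | \mathbf{x})\) and the reparameterization trick on a toy conjugate model; the likelihood, prior, learning rate and step count are illustrative assumptions of mine, not taken from any of the cited papers.

```python
# Minimal single-sample ELBO maximisation sketch (toy example, assumptions noted above).
import torch

torch.manual_seed(0)
x = torch.tensor(2.0)                                # one observation
phi = torch.tensor([0.0, 0.0], requires_grad=True)   # variational parameters [mean, log-std] of q(z|x)
opt = torch.optim.Adam([phi], lr=1e-2)

def log_p_x_given_z(x, z):
    # assumed likelihood: x | z ~ N(z, 1), up to an additive constant
    return -0.5 * (x - z) ** 2

def log_p_z(z):
    # assumed prior: z ~ N(0, 1), up to an additive constant
    return -0.5 * z ** 2

for step in range(2000):
    mu, log_sigma = phi[0], phi[1]
    eps = torch.randn(())                            # noise for the reparameterization trick
    z = mu + torch.exp(log_sigma) * eps              # z ~ q_phi(z | x), differentiable in phi
    log_q = -0.5 * ((z - mu) / torch.exp(log_sigma)) ** 2 - log_sigma
    elbo = log_p_x_given_z(x, z) + log_p_z(z) - log_q
    loss = -elbo                                     # free energy F(x) = -ELBO, to be minimised
    opt.zero_grad()
    loss.backward()
    opt.step()

# For this conjugate toy model the exact posterior is N(x/2, 1/2), so the fitted mean
# and exp(log-std) should end up near 1.0 and 0.71 (noisily, given single-sample gradients).
print(phi.detach())
```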

Mixture models

Mixture models are classic and, for a long time, seemed to be the default choice of variational approximating family. I do not have much use for these.

Reparameterization trick

See reparameterisation.

Autoencoders

See variational autoencoders?

Loss functions

In which probability metric should one approximate the target density? By tradition and for convenience we usually use the KL divergence, but this is not ideal for every purpose, and alternatives are a hot topic.

Ingmar Schuster’s critique of black-box losses raises some issues with Operator Variational Inference (Ranganath et al. 2016):

It’s called Operator VI as a fancy way to say that one is flexible in constructing how exactly the objective function uses \(\pi, q\) and test functions from some family \(\mathcal{F}\). I completely agree with the motivation: KL-Divergence in the form \(\int q(x) \log \frac{q(x)}{\pi(x)} \mathrm{d}x\) indeed underestimates the variance of \(\pi\) and approximates only one mode. Using KL the other way around, \(\int \pi(x) \log \frac{\pi(x)}{q(x)} \mathrm{d}x\) takes all modes into account, but still tends to underestimate variance.

The authors suggest an objective using what they call the Langevin-Stein operator, which does not make use of the proposal density \(q\) at all but uses test functions exclusively.
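A toy numerical illustration of that asymmetry (my own sketch, not from the quoted post; the bimodal target, grid discretisation and optimiser defaults are assumptions): fit a single Gaussian to a two-component mixture by minimising each direction of the KL divergence and compare the results.

```python
# Forward vs reverse KL fits of a single Gaussian to a bimodal target (toy sketch).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

xs = np.linspace(-10, 10, 2001)
dx = xs[1] - xs[0]
# bimodal target pi: equal mixture of N(-3, 1) and N(3, 1)
pi = 0.5 * norm.pdf(xs, -3, 1) + 0.5 * norm.pdf(xs, 3, 1)

def kl(p, q):
    # discretised KL divergence on the grid; floor q to avoid log(0) at the edges
    q = np.maximum(q, 1e-300)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

def reverse_kl(params):   # KL(q || pi): the usual variational objective
    q = norm.pdf(xs, params[0], np.exp(params[1]))
    return kl(q, pi)

def forward_kl(params):   # KL(pi || q): the "other way around"
    q = norm.pdf(xs, params[0], np.exp(params[1]))
    return kl(pi, q)

rev = minimize(reverse_kl, x0=[2.0, 0.0]).x
fwd = minimize(forward_kl, x0=[2.0, 0.0]).x
print("reverse KL fit: mean %.2f, sd %.2f" % (rev[0], np.exp(rev[1])))  # hugs one mode
print("forward KL fit: mean %.2f, sd %.2f" % (fwd[0], np.exp(fwd[1])))  # spreads over both modes
```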

References

Abbasnejad, Ehsan, Anthony Dick, and Anton van den Hengel. 2016. “Infinite Variational Autoencoder for Semi-Supervised Learning.” In Advances in Neural Information Processing Systems 29. http://arxiv.org/abs/1611.07800.

Archer, Evan, Il Memming Park, Lars Buesing, John Cunningham, and Liam Paninski. 2015. “Black Box Variational Inference for State Space Models,” November. http://arxiv.org/abs/1511.07367.

Bamler, Robert, and Stephan Mandt. 2017. “Structured Black Box Variational Inference for Latent Time Series Models,” July. http://arxiv.org/abs/1707.01069.

Berg, Rianne van den, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. 2018. “Sylvester Normalizing Flows for Variational Inference.” In UAI18. http://arxiv.org/abs/1803.05649.

Bishop, Christopher. 1994. “Mixture Density Networks.” Microsoft Research, January. https://www.microsoft.com/en-us/research/publication/mixture-density-networks/.

Blei, David M., Alp Kucukelbir, and Jon D. McAuliffe. 2017. “Variational Inference: A Review for Statisticians.” Journal of the American Statistical Association 112 (518): 859–77. https://doi.org/10.1080/01621459.2017.1285773.

Caterini, Anthony L., Arnaud Doucet, and Dino Sejdinovic. 2018. “Hamiltonian Variational Auto-Encoder.” In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1805.11328.

Chen, Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 2018. “Neural Ordinary Differential Equations.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 6572–83. Curran Associates, Inc. http://papers.nips.cc/paper/7892-neural-ordinary-differential-equations.pdf.

Chung, Junyoung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. “A Recurrent Latent Variable Model for Sequential Data.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2980–8. Curran Associates, Inc. http://papers.nips.cc/paper/5653-a-recurrent-latent-variable-model-for-sequential-data.pdf.

Cutajar, Kurt, Edwin V. Bonilla, Pietro Michiardi, and Maurizio Filippone. 2017. “Random Feature Expansions for Deep Gaussian Processes.” In PMLR. http://proceedings.mlr.press/v70/cutajar17a.html.

Doerr, Andreas, Christian Daniel, Martin Schiegg, Duy Nguyen-Tuong, Stefan Schaal, Marc Toussaint, and Sebastian Trimpe. 2018. “Probabilistic Recurrent State-Space Models,” January. http://arxiv.org/abs/1801.10395.

Fabius, Otto, and Joost R. van Amersfoort. 2014. “Variational Recurrent Auto-Encoders.” In Proceedings of ICLR. http://arxiv.org/abs/1412.6581.

Flunkert, Valentin, David Salinas, and Jan Gasthaus. 2017. “DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks,” April. http://arxiv.org/abs/1704.04110.

Fortunato, Meire, Charles Blundell, and Oriol Vinyals. 2017. “Bayesian Recurrent Neural Networks,” April. http://arxiv.org/abs/1704.02798.

Fraccaro, Marco, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. 2016. “Sequential Neural Models with Stochastic Layers.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 2199–2207. Curran Associates, Inc. http://papers.nips.cc/paper/6039-sequential-neural-models-with-stochastic-layers.pdf.

Frey, B.J., and Nebojsa Jojic. 2005. “A Comparison of Algorithms for Inference and Learning in Probabilistic Graphical Models.” IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (9): 1392–1416. https://doi.org/10.1109/TPAMI.2005.169.

Gagen, Michael J, and Kae Nemoto. 2006. “Variational Optimization of Probability Measure Spaces Resolves the Chain Store Paradox.”

Gal, Yarin, and Mark van der Wilk. 2014. “Variational Inference in Sparse Gaussian Process Regression and Latent Variable Models - a Gentle Tutorial,” February. http://arxiv.org/abs/1402.1412.

Giordano, Ryan, Tamara Broderick, and Michael I. Jordan. 2017. “Covariances, Robustness, and Variational Bayes,” September. http://arxiv.org/abs/1709.02536.

Grathwohl, Will, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. 2018. “FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models,” October. http://arxiv.org/abs/1810.01367.

Graves, Alex. 2011. “Practical Variational Inference for Neural Networks.” In Proceedings of the 24th International Conference on Neural Information Processing Systems, 2348–56. NIPS’11. USA: Curran Associates Inc. https://papers.nips.cc/paper/4329-practical-variational-inference-for-neural-networks.pdf.

Gu, Shixiang, Zoubin Ghahramani, and Richard E Turner. 2015. “Neural Adaptive Sequential Monte Carlo.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2629–37. Curran Associates, Inc. http://papers.nips.cc/paper/5961-neural-adaptive-sequential-monte-carlo.pdf.

Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. “Improved Training of Wasserstein GANs,” March. http://arxiv.org/abs/1704.00028.

He, Junxian, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. “Lagging Inference Networks and Posterior Collapse in Variational Autoencoders.” In Proceedings of ICLR. http://arxiv.org/abs/1901.05534.

Hinton, G. E. 1995. “The Wake-Sleep Algorithm for Unsupervised Neural Networks.” Science 268 (5214): 1158–61. https://doi.org/10.1126/science.7761831.

Hoffman, Matt, David M. Blei, Chong Wang, and John Paisley. 2013. “Stochastic Variational Inference.” Journal of Machine Learning Research 14 (1). http://arxiv.org/abs/1206.7051.

Hoffman, Matthew, and David Blei. 2015. “Stochastic Structured Variational Inference.” In PMLR, 361–69. http://proceedings.mlr.press/v38/hoffman15.html.

Huang, Chin-Wei, David Krueger, Alexandre Lacoste, and Aaron Courville. 2018. “Neural Autoregressive Flows,” April. http://arxiv.org/abs/1804.00779.

Huggins, Jonathan H., Mikołaj Kasprzak, Trevor Campbell, and Tamara Broderick. 2019. “Practical Posterior Error Bounds from Variational Objectives,” October. http://arxiv.org/abs/1910.04102.

Jaakkola, Tommi S., and Michael I. Jordan. 1998. “Improving the Mean Field Approximation via the Use of Mixture Distributions.” In Learning in Graphical Models, 163–73. NATO ASI Series. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-5014-9_6.

Johnson, Matthew J., David Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. 2016. “Composing Graphical Models with Neural Networks for Structured Representations and Fast Inference,” March. http://arxiv.org/abs/1603.06277.

Jordan, Michael I., Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. “An Introduction to Variational Methods for Graphical Models.” Machine Learning 37 (2): 183–233. https://doi.org/10.1023/A:1007665907178.

Karl, Maximilian, Maximilian Soelch, Justin Bayer, and Patrick van der Smagt. 2016. “Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data.” In Proceedings of ICLR. http://arxiv.org/abs/1605.06432.

Kingma, Diederik P., Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. “Improving Variational Inference with Inverse Autoregressive Flow.” In Advances in Neural Information Processing Systems 29. Curran Associates, Inc. http://arxiv.org/abs/1606.04934.

Kingma, Diederik P., Tim Salimans, and Max Welling. 2015. “Variational Dropout and the Local Reparameterization Trick,” June. http://arxiv.org/abs/1506.02557.

Kingma, Diederik P., and Max Welling. 2014. “Auto-Encoding Variational Bayes.” In ICLR 2014 Conference. http://arxiv.org/abs/1312.6114.

Kingma, Durk P, and Prafulla Dhariwal. 2018. “Glow: Generative Flow with Invertible 1x1 Convolutions.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 10236–45. Curran Associates, Inc. http://papers.nips.cc/paper/8224-glow-generative-flow-with-invertible-1x1-convolutions.pdf.

Larsen, Anders Boesen Lindbo, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. 2015. “Autoencoding Beyond Pixels Using a Learned Similarity Metric,” December. http://arxiv.org/abs/1512.09300.

Liu, Huidong, Xianfeng Gu, and Dimitris Samaras. 2018. “A Two-Step Computation of the Exact GAN Wasserstein Distance.” In International Conference on Machine Learning, 3159–68. http://proceedings.mlr.press/v80/liu18d.html.

Liu, Qiang, and Dilin Wang. 2019. “Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm.” In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1608.04471.

Louizos, Christos, Uri Shalit, Joris M Mooij, David Sontag, Richard Zemel, and Max Welling. 2017. “Causal Effect Inference with Deep Latent-Variable Models.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 6446–56. Curran Associates, Inc. http://papers.nips.cc/paper/7223-causal-effect-inference-with-deep-latent-variable-models.pdf.

Louizos, Christos, and Max Welling. 2016. “Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors.” In arXiv Preprint arXiv:1603.04733, 1708–16. http://arxiv.org/abs/1603.04733.

———. 2017. “Multiplicative Normalizing Flows for Variational Bayesian Neural Networks.” In PMLR, 2218–27. http://proceedings.mlr.press/v70/louizos17a.html.

Luts, Jan. 2015. “Real-Time Semiparametric Regression for Distributed Data Sets.” IEEE Transactions on Knowledge and Data Engineering 27 (2): 545–57. https://doi.org/10.1109/TKDE.2014.2334326.

Luts, J., T. Broderick, and M. P. Wand. 2014. “Real-Time Semiparametric Regression.” Journal of Computational and Graphical Statistics 23 (3): 589–615. https://doi.org/10.1080/10618600.2013.810150.

MacKay, David J C. 2002a. “Gaussian Processes.” In Information Theory, Inference & Learning Algorithms, Chapter 45. Cambridge University Press. http://www.inference.phy.cam.ac.uk/mackay/itprnn/ps/534.548.pdf.

———. 2002b. Information Theory, Inference & Learning Algorithms. Cambridge University Press.

Maddison, Chris J., Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Whye Teh. 2017. “Filtering Variational Objectives.” arXiv Preprint arXiv:1705.09279. https://arxiv.org/abs/1705.09279.

Mahdian, Saied, Jose Blanchet, and Peter Glynn. 2019. “Optimal Transport Relaxations with Application to Wasserstein GANs,” June. https://arxiv.org/abs/1906.03317v1.

Marzouk, Youssef, Tarek Moselhy, Matthew Parno, and Alessio Spantini. 2016. “Sampling via Measure Transport: An Introduction.” In Handbook of Uncertainty Quantification, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1–41. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-11259-6_23-1.

Minka, Thomas P. 2001. “Expectation Propagation for Approximate Bayesian Inference.” In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, 362–69. UAI’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. https://dslpitt.org/uai/papers/01/p362-minka.pdf.

Molchanov, Dmitry, Arsenii Ashukha, and Dmitry Vetrov. 2017. “Variational Dropout Sparsifies Deep Neural Networks.” In Proceedings of ICML. http://arxiv.org/abs/1701.05369.

Ormerod, J. T., and M. P. Wand. 2010. “Explaining Variational Approximations.” The American Statistician 64 (2): 140–53. https://doi.org/10.1198/tast.2010.09058.

Papamakarios, George, Iain Murray, and Theo Pavlakou. 2017. “Masked Autoregressive Flow for Density Estimation.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 2338–47. Curran Associates, Inc. http://papers.nips.cc/paper/6828-masked-autoregressive-flow-for-density-estimation.pdf.

Pereyra, M., P. Schniter, É Chouzenoux, J. C. Pesquet, J. Y. Tourneret, A. O. Hero, and S. McLaughlin. 2016. “A Survey of Stochastic Simulation and Optimization Methods in Signal Processing.” IEEE Journal of Selected Topics in Signal Processing 10 (2): 224–41. https://doi.org/10.1109/JSTSP.2015.2496908.

Ranganath, Rajesh, Dustin Tran, Jaan Altosaar, and David Blei. 2016. “Operator Variational Inference.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 496–504. Curran Associates, Inc. http://papers.nips.cc/paper/6091-operator-variational-inference.pdf.

Ranganath, Rajesh, Dustin Tran, and David Blei. 2016. “Hierarchical Variational Models.” In PMLR, 324–33. http://proceedings.mlr.press/v48/ranganath16.html.

Rezende, Danilo Jimenez, and Shakir Mohamed. 2015. “Variational Inference with Normalizing Flows.” In International Conference on Machine Learning, 1530–8. ICML’15. Lille, France: JMLR.org. http://arxiv.org/abs/1505.05770.

Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. 2015. “Stochastic Backpropagation and Approximate Inference in Deep Generative Models.” In Proceedings of ICML. http://arxiv.org/abs/1401.4082.

Ruiz, Francisco J. R., Michalis K. Titsias, and David M. Blei. 2016. “The Generalized Reparameterization Gradient.” In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1610.02287.

Ryder, Thomas, Andrew Golightly, A. Stephen McGough, and Dennis Prangle. 2018. “Black-Box Variational Inference for Stochastic Differential Equations,” February. http://arxiv.org/abs/1802.03335.

Salimans, Tim, Diederik Kingma, and Max Welling. 2015. “Markov Chain Monte Carlo and Variational Inference: Bridging the Gap.” In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 1218–26. ICML’15. Lille, France: JMLR.org. http://proceedings.mlr.press/v37/salimans15.html.

Spantini, Alessio, Daniele Bigoni, and Youssef Marzouk. 2017. “Inference via Low-Dimensional Couplings.” Journal of Machine Learning Research 19 (66): 2639–2709. http://arxiv.org/abs/1703.06131.

Staines, Joe, and David Barber. 2012. “Variational Optimization,” December. http://arxiv.org/abs/1212.4507.

Titsias, Michalis K., and Miguel Lázaro-Gredilla. 2014. “Doubly Stochastic Variational Bayes for Non-Conjugate Inference.” In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, II–1971–II–1980. ICML’14. Beijing, China: JMLR.org. http://proceedings.mlr.press/v32/titsias14.html.

Wainwright, Martin J., and Michael I. Jordan. 2008. Graphical Models, Exponential Families, and Variational Inference. Vol. 1. Foundations and Trends® in Machine Learning. http://www.cs.berkeley.edu/~jordan/papers/wainwright-jordan-fnt.pdf.

Wainwright, M., and M. Jordan. 2005. “A Variational Principle for Graphical Models.” In New Directions in Statistical Signal Processing. Vol. 155. MIT Press. http://metro-natshar-31-71.brain.net.pk/articles/new-directions-in-statistical-signal-processing-from-systems-to-brains-neural-information-processing.9780262083485.28286.pdf#page=166.

Wand, M. P. 2016. “Fast Approximate Inference for Arbitrarily Large Semiparametric Regression Models via Message Passing.” arXiv Preprint arXiv:1602.07412. http://arxiv.org/abs/1602.07412.

Wang, Yixin, and David M. Blei. 2017. “Frequentist Consistency of Variational Bayes,” May. http://arxiv.org/abs/1705.03439.

Wiegerinck, Wim. 2000. “Variational Approximations Between Mean Field Theory and the Junction Tree Algorithm.” In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, 626–33. UAI ’00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. http://arxiv.org/abs/1301.3901.

Winn, John M., and Christopher M. Bishop. 2005. “Variational Message Passing.” In Journal of Machine Learning Research, 661–94. http://johnwinn.org/Publications/papers/VMP2005.pdf.

Xing, Eric P., Michael I. Jordan, and Stuart Russell. 2003. “A Generalized Mean Field Algorithm for Variational Inference in Exponential Families.” In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, 583–91. UAI’03. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. http://arxiv.org/abs/1212.2512.

Yao, Yuling, Aki Vehtari, Daniel Simpson, and Andrew Gelman. 2018. “Yes, but Did It Work?: Evaluating Variational Inference.” In Proceedings of ICML.

Yoshida, Ryo, and Mike West. 2010. “Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing.” Journal of Machine Learning Research 11 (May): 1771–98. http://www.jmlr.org/papers/v11/yoshida10a.html.

Zahm, Olivier, Paul Constantine, Clémentine Prieur, and Youssef Marzouk. 2018. “Gradient-Based Dimension Reduction of Multivariate Vector-Valued Functions,” January. http://arxiv.org/abs/1801.07922.