Variational autoencoders

A variational autoencoder approximates a complex posterior distribution with a simple parametric family over a latent space

A method at the intersection of stochastic variational inference and probabilistic neural nets, in which we presume the data are generated from a low-dimensional latent space. If you squint at it, this is the information-bottleneck trick in a probabilistic setting. To my mind it is a sorta-kinda nonparametric approximate Bayes method.
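To make that concrete, here is a minimal NumPy sketch (my own toy illustration, not from any particular VAE library) of the objective such a model maximizes: the evidence lower bound (ELBO), here with a diagonal-Gaussian encoder and a Bernoulli decoder. The 4-pixel "image" and linear-sigmoid decoder are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=-1)

def elbo(x, mu, logvar, decode):
    """Single-sample Monte Carlo estimate of the evidence lower bound."""
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps  # reparameterized latent sample
    x_hat = decode(z)                    # decoder outputs pixel probabilities
    # Bernoulli log-likelihood of the data under the decoder
    log_px = np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat), axis=-1)
    return log_px - gaussian_kl(mu, logvar)

# Toy example: a 4-pixel binary "image", a 2-d latent, a linear+sigmoid decoder.
x = np.array([1.0, 0.0, 1.0, 0.0])
W = rng.standard_normal((2, 4))
decode = lambda z: 1.0 / (1.0 + np.exp(-(z @ W)))
bound = elbo(x, np.zeros(2), np.zeros(2), decode)
```

Training a real VAE amounts to ascending this bound jointly in the encoder parameters (which produce `mu`, `logvar`) and the decoder parameters.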

There is a lot more going on here than I have time to explain, let alone the parts I have not yet understood myself.

TBD: connection to reparameterization tricks.
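The gist of that connection, as I understand it: write the random draw as a deterministic function of the variational parameters plus parameter-free noise, so that gradients with respect to the parameters are well defined and pass through the sample. A small sanity-check sketch (plain NumPy, no autodiff framework assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, logvar, eps):
    """z = mu + sigma * eps: a deterministic, differentiable map of
    (mu, logvar), with all randomness in parameter-free noise eps ~ N(0, I)."""
    return mu + np.exp(0.5 * logvar) * eps

# Sanity check: samples drawn this way have the intended moments.
mu, logvar = 2.0, np.log(0.25)          # target N(2, 0.25), i.e. sigma = 0.5
eps = rng.standard_normal(100_000)
z = reparameterize(mu, logvar, eps)
```

Because `eps` carries all the randomness, an autodiff framework can differentiate a Monte Carlo estimate of the ELBO straight through `z` to `mu` and `logvar`.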

To explore: relative complexity of these methods, e.g. how long does it take to train a variational autoencoder for a given task compared to a similarly expressive GAN?

For now, check out some of the many tutorials, e.g.


Abbasnejad, Ehsan, Anthony Dick, and Anton van den Hengel. 2016. β€œInfinite Variational Autoencoder for Semi-Supervised Learning.” In Advances in Neural Information Processing Systems 29.
Ambrogioni, Luca, Umut GΓΌΓ§lΓΌ, Yagmur GΓΌΓ§lΓΌtΓΌrk, Max Hinne, Eric Maris, and Marcel A. J. van Gerven. 2018. β€œWasserstein Variational Inference.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2478–87. NIPS’18. USA: Curran Associates Inc.
Arjovsky, Martin, Soumith Chintala, and LΓ©on Bottou. 2017. β€œWasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.
Bamler, Robert, and Stephan Mandt. 2017. β€œStructured Black Box Variational Inference for Latent Time Series Models.” arXiv:1707.01069 [Cs, Stat], July.
Berg, Rianne van den, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. 2018. β€œSylvester Normalizing Flows for Variational Inference.” In UAI 2018.
Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. β€œCompressed Sensing Using Generative Models.” In International Conference on Machine Learning, 537–46.
Bowman, Samuel R., Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. 2015. β€œGenerating Sentences from a Continuous Space.” arXiv:1511.06349 [Cs], November.
Burda, Yuri, Roger Grosse, and Ruslan Salakhutdinov. 2016. β€œImportance Weighted Autoencoders.” In arXiv:1509.00519 [Cs, Stat].
Caterini, Anthony L., Arnaud Doucet, and Dino Sejdinovic. 2018. β€œHamiltonian Variational Auto-Encoder.” In Advances in Neural Information Processing Systems.
Chen, Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 2018. β€œNeural Ordinary Differential Equations.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 6572–83. Curran Associates, Inc.
Chen, Xi, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. β€œVariational Lossy Autoencoder.” In Proceedings of ICLR.
Chung, Junyoung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. β€œA Recurrent Latent Variable Model for Sequential Data.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2980–88. Curran Associates, Inc.
Cremer, Chris, Xuechen Li, and David Duvenaud. 2018. β€œInference Suboptimality in Variational Autoencoders.” arXiv:1801.03558 [Cs, Stat], January.
Cutajar, Kurt, Edwin V. Bonilla, Pietro Michiardi, and Maurizio Filippone. 2017. β€œRandom Feature Expansions for Deep Gaussian Processes.” In PMLR.
Dupont, Emilien, Arnaud Doucet, and Yee Whye Teh. 2019. β€œAugmented Neural ODEs.” arXiv:1904.01681 [Cs, Stat], April.
Fabius, Otto, and Joost R. van Amersfoort. 2014. β€œVariational Recurrent Auto-Encoders.” In Proceedings of ICLR.
Garnelo, Marta, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. 2018. β€œNeural Processes,” July.
Grathwohl, Will, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. 2018. β€œFFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models.” arXiv:1810.01367 [Cs, Stat], October.
He, Junxian, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. β€œLagging Inference Networks and Posterior Collapse in Variational Autoencoders.” In Proceedings of ICLR.
Hegde, Pashupati, Markus Heinonen, Harri LΓ€hdesmΓ€ki, and Samuel Kaski. 2018. β€œDeep Learning with Differential Gaussian Process Flows.” arXiv:1810.04066 [Cs, Stat], October.
Hoffman, Matthew D, and Matthew J Johnson. 2016. β€œELBO Surgery: Yet Another Way to Carve up the Variational Evidence Lower Bound.” In Advances In Neural Information Processing Systems, 4.
Hsu, Wei-Ning, Yu Zhang, and James Glass. 2017. β€œUnsupervised Learning of Disentangled and Interpretable Representations from Sequential Data.” In arXiv:1709.07902 [Cs, Eess, Stat].
Hu, Zhiting, Zichao Yang, Ruslan Salakhutdinov, and Eric P. Xing. 2018. β€œOn Unifying Deep Generative Models.” In arXiv:1706.00550 [Cs, Stat].
Huang, Chin-Wei, David Krueger, Alexandre Lacoste, and Aaron Courville. 2018. β€œNeural Autoregressive Flows.” arXiv:1804.00779 [Cs, Stat], April.
Husain, Hisham, Richard Nock, and Robert C. Williamson. 2019. β€œA Primal-Dual Link Between GANs and Autoencoders.” In Advances in Neural Information Processing Systems, 32:415–24.
Kim, Yoon, Sam Wiseman, Andrew C. Miller, David Sontag, and Alexander M. Rush. 2018. β€œSemi-Amortized Variational Autoencoders.” arXiv:1802.02550 [Cs, Stat], February.
Kingma, Diederik P. 2017. β€œVariational Inference & Deep Learning: A New Synthesis.”
Kingma, Diederik P., Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. β€œImproving Variational Inference with Inverse Autoregressive Flow.” In Advances in Neural Information Processing Systems 29. Curran Associates, Inc.
Kingma, Diederik P., Tim Salimans, and Max Welling. 2015. β€œVariational Dropout and the Local Reparameterization Trick.” In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, 2575–83. NIPS’15. Cambridge, MA, USA: MIT Press.
Kingma, Diederik P., and Max Welling. 2014. β€œAuto-Encoding Variational Bayes.” In ICLR 2014 Conference.
β€”β€”β€”. 2019. An Introduction to Variational Autoencoders. Vol. 12. Foundations and Trends in Machine Learning. Now Publishers, Inc.
Kingma, Durk P, and Prafulla Dhariwal. 2018. β€œGlow: Generative Flow with Invertible 1x1 Convolutions.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 10236–45. Curran Associates, Inc.
Knop, Szymon, PrzemysΕ‚aw Spurek, Jacek Tabor, Igor Podolak, Marcin Mazur, and StanisΕ‚aw JastrzΔ™bski. 2020. β€œCramer-Wold Auto-Encoder.” Journal of Machine Learning Research 21 (164): 1–28.
Larsen, Anders Boesen Lindbo, SΓΈren Kaae SΓΈnderby, Hugo Larochelle, and Ole Winther. 2015. β€œAutoencoding Beyond Pixels Using a Learned Similarity Metric.” arXiv:1512.09300 [Cs, Stat], December.
Lee, Holden, Rong Ge, Tengyu Ma, Andrej Risteski, and Sanjeev Arora. 2017. β€œOn the Ability of Neural Nets to Express Distributions.” In arXiv:1702.07028 [Cs].
Liang, Dawen, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. β€œVariational Autoencoders for Collaborative Filtering.” In Proceedings of the 2018 World Wide Web Conference, 689–98. WWW ’18. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee.
Louizos, Christos, Uri Shalit, Joris M Mooij, David Sontag, Richard Zemel, and Max Welling. 2017. β€œCausal Effect Inference with Deep Latent-Variable Models.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 6446–56. Curran Associates, Inc.
Louizos, Christos, and Max Welling. 2017. β€œMultiplicative Normalizing Flows for Variational Bayesian Neural Networks.” In PMLR, 2218–27.
Luo, Yin-Jyun, Kat Agres, and Dorien Herremans. 2019. β€œLearning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Mathieu, Emile, Tom Rainforth, N. Siddharth, and Yee Whye Teh. 2019. β€œDisentangling Disentanglement in Variational Autoencoders.” In International Conference on Machine Learning, 4402–12. PMLR.
Meent, Jan-Willem van de, Brooks Paige, Hongseok Yang, and Frank Wood. 2021. β€œAn Introduction to Probabilistic Programming.” arXiv:1809.10756 [Cs, Stat], October.
Ng, Ignavier, Zhuangyan Fang, Shengyu Zhu, Zhitang Chen, and Jun Wang. 2020. β€œMasked Gradient-Based Causal Structure Learning.” arXiv:1910.08527 [Cs, Stat], February.
Ng, Ignavier, Shengyu Zhu, Zhitang Chen, and Zhuangyan Fang. 2019. β€œA Graph Autoencoder Approach to Causal Structure Learning.” In Advances In Neural Information Processing Systems.
Papamakarios, George, Iain Murray, and Theo Pavlakou. 2017. β€œMasked Autoregressive Flow for Density Estimation.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 2338–47. Curran Associates, Inc.
Rakesh, Vineeth, Ruocheng Guo, Raha Moraffah, Nitin Agarwal, and Huan Liu. 2018. β€œLinked Causal Variational Autoencoder for Inferring Paired Spillover Effects.” In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 1679–82. CIKM ’18. New York, NY, USA: Association for Computing Machinery.
Rezende, Danilo Jimenez, and Shakir Mohamed. 2015. β€œVariational Inference with Normalizing Flows.” In International Conference on Machine Learning, 1530–38. ICML’15. Lille, France.
Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. 2015. β€œStochastic Backpropagation and Approximate Inference in Deep Generative Models.” In Proceedings of ICML.
Rippel, Oren, and Ryan Prescott Adams. 2013. β€œHigh-Dimensional Probability Estimation with Deep Density Models.” arXiv:1302.5125 [Cs, Stat], February.
Roberts, Adam, Jesse Engel, Colin Raffel, Curtis Hawthorne, and Douglas Eck. 2018. β€œA Hierarchical Latent Vector Model for Learning Long-Term Structure in Music.” arXiv:1803.05428 [Cs, Eess, Stat], March.
Roeder, Geoffrey, Paul K. Grant, Andrew Phillips, Neil Dalchau, and Edward Meeds. 2019. β€œEfficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems.” arXiv:1905.12090 [Cs, Stat], May.
Ruiz, Francisco J. R., Michalis K. Titsias, and David M. Blei. 2016. β€œThe Generalized Reparameterization Gradient.” In Advances In Neural Information Processing Systems.
Salimans, Tim, Diederik Kingma, and Max Welling. 2015. β€œMarkov Chain Monte Carlo and Variational Inference: Bridging the Gap.” In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 1218–26. ICML’15. Lille, France.
Spantini, Alessio, Daniele Bigoni, and Youssef Marzouk. 2017. β€œInference via Low-Dimensional Couplings.” Journal of Machine Learning Research 19 (66): 2639–709.
Tait, Daniel J., and Theodoros Damoulas. 2020. β€œVariational Autoencoding of PDE Inverse Problems.” arXiv:2006.15641 [Cs, Stat], June.
Tran, Dustin, Rajesh Ranganath, and David M. Blei. 2015. β€œThe Variational Gaussian Process.” In Proceedings of ICLR.
Ullrich, K. 2020. β€œA Coding Perspective on Deep Latent Variable Models.”
Wang, Prince Zizhuang, and William Yang Wang. 2019. β€œRiemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 284–94. Minneapolis, Minnesota: Association for Computational Linguistics.
Yang, Mengyue, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, and Jun Wang. 2020. β€œCausalVAE: Disentangled Representation Learning via Neural Structural Causal Models.” arXiv:2004.08697 [Cs, Stat], July.
Zahm, Olivier, Paul Constantine, ClΓ©mentine Prieur, and Youssef Marzouk. 2018. β€œGradient-Based Dimension Reduction of Multivariate Vector-Valued Functions.” arXiv:1801.07922 [Math], January.
