Deep generative models

December 10, 2020 — November 11, 2021

approximation
Bayes
generative
likelihood free
Monte Carlo
neural nets
optimization
probabilistic algorithms
probability
statistics
unsupervised
Figure 1: Generating a synthetic observation at great depth

Certain famous models in neural nets are generative: informally, they produce samples from some distribution, and in training the distribution of those samples is tweaked until it resembles, in some sense, the distribution of our observed data. There are many attempts now to unify fancy generative techniques such as GANs, VAEs, and neural diffusion into a single method, or at least a cordial family of methods, so I had better devise a page for that.

Figure 2: Lilian Weng diagrams some popular generative architectures.

Here I mean generative in the sense that “this model will (approximately) simulate from the true distribution of interest,” which is somewhat weaker than the requirements of, e.g., Monte Carlo Bayesian inference, where we assume we can access likelihoods, or at least likelihood gradients. Here, by contrast, we might have no likelihood at all, or only a variational approximation to one.
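To make that simulate-rather-than-evaluate point concrete, here is a minimal, hypothetical sketch of likelihood-free generative training: a toy generator is fitted by matching its samples to the data samples under a kernel maximum mean discrepancy (MMD), so no likelihood is evaluated anywhere. The generator, kernel, bandwidth, and step size are placeholder choices of my own, for illustration only.

```python
# A hypothetical, minimal sketch of likelihood-free generative training:
# fit a toy generator by matching its samples to data samples under a kernel
# maximum mean discrepancy (MMD); no likelihood is evaluated anywhere.
import jax
import jax.numpy as jnp

def rbf_kernel(x, y, bandwidth=1.0):
    # Gram matrix of a Gaussian RBF kernel between two sample sets.
    sq_dists = jnp.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2(x, y):
    # (Biased) estimate of the squared MMD between the empirical distributions.
    return rbf_kernel(x, x).mean() + rbf_kernel(y, y).mean() - 2.0 * rbf_kernel(x, y).mean()

def generator(params, z):
    # Toy generator: an affine push-forward of standard Gaussian noise.
    return z @ params["W"] + params["b"]

def loss(params, z, data):
    return mmd2(generator(params, z), data)

key = jax.random.PRNGKey(0)
data = 1.0 + 0.5 * jax.random.normal(key, (256, 2))   # stand-in "observations"
params = {"W": jnp.eye(2), "b": jnp.zeros(2)}
grad_loss = jax.jit(jax.grad(loss))                   # derivative of a sampled discrepancy

for step in range(500):
    key, sub = jax.random.split(key)
    z = jax.random.normal(sub, (256, 2))              # fresh noise each step
    grads = grad_loss(params, z, data)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
```

A GAN replaces the fixed kernel discrepancy with a learned discriminator, and a VAE replaces it with a variational bound, but the shape of the loop (sample, compare distributions, differentiate, update) is common to the family.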

Figure 3: Observations arising from unobserved latent factors

1 Philosophical diversion: probability is a weird abstraction

Tangent: Learning problems involve composing differentiation and integration of various terms that measure how well we have approximated the state of the world. Probabilistic neural networks exploit combinations of integrals that we can approximate by Monte Carlo and derivatives that we can compute by automatic differentiation, both of which are fast-ish on modern hardware. In cunning combination, these find approximate solutions to some very interesting problems in calculus.

Although… there is something odd about that setup. From this perspective, generative models (such as GANs and autoencoders) attack an intractable integral by simulating samples from it, in lieu of evaluating the continuous, unknowable integral we nominally wish to solve. But that integral was in any case a contrivance: a thought experiment imagining a world populated with such weird Platonic objects as integrals-over-possible-states-of-the-world, which only mathematicians would consider reasonable. The world we live in has, as far as I know, no such thing. The things we observe are not stochastic samples from an ineffable probability density; the observations themselves are the phenomena, and the probability density over them is a weird abstraction. It must look deeply odd from the outside when we talk about solving integrals by looking at data, instead of solving data by looking at integrals.
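For instance, here is a minimal, hypothetical sketch (not part of the original argument) of that integrate-by-Monte-Carlo, differentiate-by-autodiff combination: reparameterise the random argument as a differentiable function of fixed noise, and a single sampled sum serves as both the integral and the thing we differentiate. The integrand and sample size are arbitrary illustrative choices.

```python
# Hypothetical sketch: estimate d/dtheta E_{z ~ N(theta, 1)}[ f(z) ] by
# reparameterising z = theta + eps with eps ~ N(0, 1), so one Monte Carlo
# sum supplies both the integral and (via autodiff) its derivative.
import jax
import jax.numpy as jnp

def f(z):
    # An arbitrary differentiable integrand, chosen so the answer is checkable.
    return jnp.sin(z) + z**2

def expectation(theta, eps):
    # Monte Carlo estimate of the integral E[f(z)], with z = theta + eps.
    return jnp.mean(f(theta + eps))

key = jax.random.PRNGKey(0)
eps = jax.random.normal(key, (100_000,))
theta = 0.3

mc_value = expectation(theta, eps)           # the integral, by sampling
mc_grad = jax.grad(expectation)(theta, eps)  # its derivative, by autodiff
# Exact answers for comparison: E[f] = sin(theta) * exp(-0.5) + theta**2 + 1,
# and d/dtheta E[f] = cos(theta) * exp(-0.5) + 2 * theta.
print(mc_value, mc_grad)
```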

2 Generative flow nets

See this page.
