Generative adversarial networks

The critic providing a gradient update to the generator

Game theory meets learning. Hip, especially in combination with deep learning, because it provides an elegant means of likelihood-free inference.

I don’t know much about this yet. The gist: train two networks together, one generating examples of a phenomenon of interest and the other classifying real examples apart from generated ones, each improving against the other.
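As a concrete anchor, here is a minimal sketch of that two-player game in PyTorch, in the spirit of Goodfellow et al. (2014): toy fully connected networks, a Gaussian standing in for the data distribution, and the non-saturating generator loss. Everything here is illustrative rather than a serious implementation.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Toy players; real GANs use much bigger networks.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(128, data_dim) + 2.0   # stand-in "data" distribution
    fake = G(torch.randn(128, latent_dim))

    # Discriminator step: tell real samples apart from generated ones.
    opt_D.zero_grad()
    loss_D = (bce(D(real), torch.ones(128, 1))
              + bce(D(fake.detach()), torch.zeros(128, 1)))
    loss_D.backward()
    opt_D.step()

    # Generator step: fool the discriminator (non-saturating loss).
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(128, 1))
    loss_G.backward()
    opt_G.step()
```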

Sanjeev Arora gives a cogent intro. He also suggests a link with learning theory. See also Delving deep into Generative Adversarial Networks, a “curated, quasi-exhaustive list of state-of-the-art publications and resources about Generative Adversarial Networks (GANs) and their applications.”

GANs are famous for generating images, but I am interested in their use in simulating from difficult distributions in general.

Here is a spreadsheet interface for exploring GAN latent spaces. See also The GAN Zoo, “A list of all named GANs!”

To discover: the precise relationship of deep GANs with, e.g., adversarial training in games and bandit problems. Also, why not, let us consider Augustus Odena’s Open Questions about GANs.

Wasserstein GAN

A tasty hack. The Wasserstein GAN paper (Arjovsky, Chintala, and Bottou 2017) made quite a splash. The argument is that, kinda-sorta, if you squint at it, you can understand the GAN as solving an inference problem with respect to a Wasserstein loss. The argument has since been made more precise and extended, but for all its flaws the original article has, IMO, a good insight and a clear explanation of it.

A sample drawn from the distribution of all images of cyclists

I will not summarize WGANs better than the following handy sources, so let us read those.

Vincent Herrmann presents the Kantorovich-Rubinstein duality trick intuitively.
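For reference, the duality in question writes the Wasserstein-1 distance as a supremum over 1-Lipschitz test functions; the WGAN critic is a neural network standing in for that test function:

$$
W_1(P, Q) = \sup_{\|f\|_L \le 1}\; \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)].
$$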

Connection to other types of regularisation? (Gulrajani et al. 2017; Miyato et al. 2018)
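Both of those can be read as ways of enforcing the critic’s Lipschitz constraint: Gulrajani et al. (2017) penalize the critic’s gradient norm at points interpolated between real and generated samples, while Miyato et al. (2018) normalize the spectral norm of each layer. A minimal sketch of the gradient-penalty term, assuming flat feature vectors as in the toy example above (images would need extra singleton dimensions on `eps`):

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # WGAN-GP (Gulrajani et al. 2017): softly push the critic's gradient
    # norm towards 1 at random interpolates of real and generated points.
    # Call with fake.detach() during the critic update.
    eps = torch.rand(real.shape[0], 1)  # per-sample mixing weight
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)
    return lam * ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
```

Spectral normalization, by contrast, ships as a layer wrapper in PyTorch (`torch.nn.utils.spectral_norm`), so the constraint is baked into the critic’s parameterization rather than added to the loss.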


Conditional GANs

How does conditioning work? There are many papers exploring that. How about these two? Mirza and Osindero (2014); Isola et al. (2017)
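The basic recipe in Mirza and Osindero (2014) is disarmingly simple: show the conditioning variable, e.g. a class label, to both players. A minimal sketch, with illustrative one-hot labels and the same toy dimensions as above:

```python
import torch
import torch.nn as nn

n_classes, latent_dim, data_dim = 10, 16, 2

# Both players see the label, concatenated onto their usual inputs.
G = nn.Sequential(nn.Linear(latent_dim + n_classes, 64), nn.ReLU(),
                  nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim + n_classes, 64), nn.ReLU(),
                  nn.Linear(64, 1))

z = torch.randn(8, latent_dim)
y = nn.functional.one_hot(torch.randint(0, n_classes, (8,)), n_classes).float()
fake = G(torch.cat([z, y], dim=1))      # generate conditioned on y
score = D(torch.cat([fake, y], dim=1))  # judge the (sample, label) pair
```

Isola et al. (2017) push the same idea further: the conditioning variable is a whole input image, and the discriminator judges (input, output) pairs.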


I think unpaired translation requires a cycle-consistency loss, whatever that is (J.-Y. Zhu et al. 2017). How is it different from an autoencoder? I suppose because it maps between two data domains rather than between a latent space and a data domain.
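For what it’s worth, here is a sketch of just that loss term, assuming `G_ab` and `G_ba` are two hypothetical translator networks mapping domain A to domain B and back; it is added to the usual adversarial losses for both translators:

```python
import torch.nn.functional as F

def cycle_loss(G_ab, G_ba, a, b, lam=10.0):
    # Cycle consistency (J.-Y. Zhu et al. 2017): translating A -> B -> A
    # (and B -> A -> B) should be approximately the identity map.
    return lam * (F.l1_loss(G_ba(G_ab(a)), a) +
                  F.l1_loss(G_ab(G_ba(b)), b))
```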

GANs as SDEs

Should look into this (L. Yang, Zhang, and Karniadakis 2020; Kidger et al. 2020).

GANs as VAEs

See deep generative models for a unifying framing.

GANs as energy-based models

Che et al. (2020) argue that a trained GAN’s discriminator secretly defines an energy-based model, which one can exploit via discriminator-driven latent sampling.


Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. “Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.
Arora, Sanjeev, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. 2017. “Generalization and Equilibrium in Generative Adversarial Nets (GANs).” March 1, 2017.
Bahadori, Mohammad Taha, Krzysztof Chalupka, Edward Choi, Robert Chen, Walter F. Stewart, and Jimeng Sun. 2017. “Neural Causal Regularization Under the Independence of Mechanisms Assumption.” February 8, 2017.
Blaauw, Merlijn, and Jordi Bonada. 2017. “A Neural Parametric Singing Synthesizer.” April 12, 2017.
Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. “Compressed Sensing Using Generative Models.” In International Conference on Machine Learning, 537–46.
Bowman, Samuel R., Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. 2015. “Generating Sentences from a Continuous Space.” November 19, 2015.
Che, Tong, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, and Yoshua Bengio. 2020. “Your GAN Is Secretly an Energy-Based Model and You Should Use Discriminator Driven Latent Sampling.” March 23, 2020.
Chen, Xi, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. “InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 2172–80. Curran Associates, Inc.
Denton, Emily, Soumith Chintala, Arthur Szlam, and Rob Fergus. 2015. “Deep Generative Image Models Using a Laplacian Pyramid of Adversarial Networks.” June 18, 2015.
Donahue, Chris, Julian McAuley, and Miller Puckette. 2019. “Adversarial Audio Synthesis.” In ICLR 2019.
Dosovitskiy, Alexey, Jost Tobias Springenberg, Maxim Tatarchenko, and Thomas Brox. 2014. “Learning to Generate Chairs, Tables and Cars with Convolutional Networks.” November 21, 2014.
Dziugaite, Gintare Karolina, Daniel M. Roy, and Zoubin Ghahramani. 2015. “Training Generative Neural Networks via Maximum Mean Discrepancy Optimization.” In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, 258–67. UAI’15. Arlington, Virginia, United States: AUAI Press.
Engel, Jesse, Cinjon Resnick, Adam Roberts, Sander Dieleman, Douglas Eck, Karen Simonyan, and Mohammad Norouzi. 2017. “Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders.” In PMLR.
Fraccaro, Marco, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. 2016. “Sequential Neural Models with Stochastic Layers.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 2199–2207. Curran Associates, Inc.
Frühstück, Anna, Ibraheem Alhashim, and Peter Wonka. 2019. “TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures.” April 29, 2019.
Gal, Yarin, and Zoubin Ghahramani. 2015. “On Modern Deep Learning and Variational Inference.” In Advances in Approximate Bayesian Inference Workshop, NIPS.
———. 2016. “Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference.” In 4th International Conference on Learning Representations (ICLR) Workshop Track.
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” December 19, 2014.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc.
Gregor, Karol, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. 2015. “DRAW: A Recurrent Neural Network for Image Generation.” February 16, 2015.
Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. “Improved Training of Wasserstein GANs.” March 31, 2017.
He, Kun, Yan Wang, and John Hopcroft. 2016. “A Powerful Generative Model Using Random Weights for the Deep Image Representation.” In Advances in Neural Information Processing Systems.
Hinton, Geoffrey E. 2007. “Learning Multiple Layers of Representation.” Trends in Cognitive Sciences 11 (10): 428–34.
Husain, Hisham. 2020. “Distributional Robustness with IPMs and Links to Regularization and GANs.” June 8, 2020.
Husain, Hisham, Richard Nock, and Robert C. Williamson. 2019. “A Primal-Dual Link Between GANs and Autoencoders.” In Advances in Neural Information Processing Systems, 32:415–24.
Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. “Image-to-Image Translation with Conditional Adversarial Networks.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5967–76.
Jetchev, Nikolay, Urs Bergmann, and Roland Vollgraf. 2016. “Texture Synthesis with Spatial Generative Adversarial Networks.” In Advances in Neural Information Processing Systems 29.
Kidger, Patrick, James Foster, Xuechen Li, Harald Oberhauser, and Terry Lyons. 2020. “Neural SDEs Made Easy: SDEs Are Infinite-Dimensional GANS.” In Advances In Neural Information Processing Systems, 6.
Kodali, Naveen, Jacob Abernethy, James Hays, and Zsolt Kira. 2017. “On Convergence and Stability of GANs.” December 10, 2017.
Krishnan, Rahul G., Uri Shalit, and David Sontag. 2017. “Structured Inference Networks for Nonlinear State Space Models.” In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2101–9.
Kulkarni, Tejas D., Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. 2015. “Deep Convolutional Inverse Graphics Network.” March 11, 2015.
Lee, Honglak, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. 2009. “Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations.” In Proceedings of the 26th Annual International Conference on Machine Learning, 609–16. ICML ’09. New York, NY, USA: ACM.
Li, Chun-Liang, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabas Poczos. 2017. “MMD GAN: Towards Deeper Understanding of Moment Matching Network.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 2203–13. Curran Associates, Inc.
Louizos, Christos, and Max Welling. 2016. “Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors.” In International Conference on Machine Learning, 1708–16.
Mirza, Mehdi, and Simon Osindero. 2014. “Conditional Generative Adversarial Nets.” November 6, 2014.
Miyato, Takeru, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 2018. “Spectral Normalization for Generative Adversarial Networks.” In ICLR 2018.
Miyato, Takeru, and Masanori Koyama. 2018. “cGANs with Projection Discriminator.” In ICLR 2018.
Mnih, Andriy, and Karol Gregor. 2014. “Neural Variational Inference and Learning in Belief Networks.” In Proceedings of The 31st International Conference on Machine Learning.
Mohamed, A. r, G. E. Dahl, and G. Hinton. 2012. “Acoustic Modeling Using Deep Belief Networks.” IEEE Transactions on Audio, Speech, and Language Processing 20 (1): 14–22.
Oord, Aäron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. “Pixel Recurrent Neural Networks.” January 25, 2016.
Panaretos, Victor M., and Yoav Zemel. 2019. “Statistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application 6 (1): 405–31.
Pascual, Santiago, Joan Serrà, and Antonio Bonafonte. 2019. “Towards Generalized Speech Enhancement with Generative Adversarial Networks.” April 6, 2019.
Pfau, David, and Oriol Vinyals. 2016. “Connecting Generative Adversarial Networks and Actor-Critic Methods.” October 6, 2016.
Poole, Ben, Alexander A. Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. 2016. “Improved Generator Objectives for GANs.” In Advances in Neural Information Processing Systems 29.
Qin, Chongli, Yan Wu, Jost Tobias Springenberg, Andy Brock, Jeff Donahue, Timothy Lillicrap, and Pushmeet Kohli. 2020. “Training Generative Adversarial Networks by Solving Ordinary Differential Equations.” In Advances in Neural Information Processing Systems. Vol. 33.
Radford, Alec, Luke Metz, and Soumith Chintala. 2015. “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.” In ICLR 2016.
Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. 2015. “Stochastic Backpropagation and Approximate Inference in Deep Generative Models.” In Proceedings of ICML.
Salakhutdinov, Ruslan. 2015. “Learning Deep Generative Models.” Annual Review of Statistics and Its Application 2 (1): 361–85.
Sun, Zheng, Jiaqi Liu, Zewang Zhang, Jingwen Chen, Zhao Huo, Ching Hua Lee, and Xiao Zhang. 2016. “Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding.” November 16, 2016.
Sutherland, Dougal J., Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, and Arthur Gretton. 2017. “Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy.” In Proceedings of ICLR.
Theis, Lucas, and Matthias Bethge. 2015. “Generative Image Modeling Using Spatial LSTMs.” June 10, 2015.
Tran, Dustin, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, and David M. Blei. 2017. “Deep Probabilistic Programming.” In ICLR.
Wang, Chuang, Hong Hu, and Yue M. Lu. 2019. “A Solvable High-Dimensional Model of GAN.” October 28, 2019.
Wu, Yan, Mihaela Rosca, and Timothy Lillicrap. 2019. “Deep Compressed Sensing.” In International Conference on Machine Learning, 6850–60.
Yang, Li-Chia, Szu-Yu Chou, and Yi-Hsuan Yang. 2017. “MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation.” In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR’2017), Suzhou, China.
Yang, Liu, Dongkun Zhang, and George Em Karniadakis. 2020. “Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations.” SIAM Journal on Scientific Computing 42 (1): A292–317.
Zang, Yaohua, Gang Bao, Xiaojing Ye, and Haomin Zhou. 2020. “Weak Adversarial Networks for High-Dimensional Partial Differential Equations.” Journal of Computational Physics 411 (June): 109409.
Zhu, B., J. Jiao, and D. Tse. 2020. “Deconstructing Generative Adversarial Networks.” IEEE Transactions on Information Theory 66 (11): 7155–79.
Zhu, Jun-Yan, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. 2016. “Generative Visual Manipulation on the Natural Image Manifold.” In Proceedings of European Conference on Computer Vision.
Zhu, Jun-Yan, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks.” In 2017 IEEE International Conference on Computer Vision (ICCV), 2223–32.
