Generative adversarial networks

The critic providing a gradient update to the generator

Game theory meets learning. Hip, especially in combination with deep learning, because it provides an elegant means of likelihood-free inference.

I know only the basics: two networks are trained together against each other, one to generate examples of a phenomenon of interest and one to classify examples as real or generated, each improving by exploiting the other’s weaknesses.
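The two-player setup is easy to sketch. Here is a minimal toy in numpy, a linear generator versus a logistic discriminator on 1-d data, with the gradients worked out by hand. This is a generic sketch of the vanilla non-saturating GAN objective, not any particular paper’s recipe; all the numbers (data distribution, learning rate, step count) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Toy data: real samples from N(3, 0.5); generator g(z) = a*z + b.
a, b = 1.0, 0.0            # generator parameters
w, c = 0.0, 0.0            # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(3000):
    x_real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on the non-saturating objective log D(fake).
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

x_gen = a * rng.normal(0.0, 1.0, 10000) + b
print(np.mean(x_gen))  # should end up near the real mean, 3
```

Note the alternation: each player takes a gradient step against the other’s current parameters, which is what makes this a game rather than a single optimization.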

Sanjeev Arora gives a cogent intro. He also suggests a link with learning theory (Arora et al. 2017). See also Delving deep into Generative Adversarial Networks, a “curated, quasi-exhaustive list of state-of-the-art publications and resources about Generative Adversarial Networks (GANs) and their applications.”

GANs are famous for generating images, but I am interested in their use in simulating from difficult distributions in general.

Try a spreadsheet interface for exploring GAN latent spaces. See also The GAN Zoo, β€œA list of all named GANs!”

To discover: the precise relationship of deep GANs with, e.g., adversarial training in games and bandit problems. Also, why not, let us consider Augustus Odena’s Open Questions about GANs.

Wasserstein GAN

A tasty hack. The Wasserstein GAN paper (Arjovsky, Chintala, and Bottou 2017) made a splash. The argument is that, kinda-sorta, if we squint at it, we can understand the GAN as solving an inference problem with respect to a Wasserstein loss. The argument has since been made more precise and extended, but for all its flaws the original article has, IMO, a good insight and a clear explanation of it.
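For intuition about the loss being targeted: in one dimension, the 1-Wasserstein distance between two equal-size empirical samples has a closed form, since the optimal transport plan simply matches order statistics. The WGAN critic exists to approximate this quantity when no such closed form is available, but the 1-d case makes a handy sanity check:

```python
import numpy as np

def wasserstein1(x, y):
    """1-Wasserstein distance between equal-size 1-d empirical samples.

    In one dimension the optimal transport plan matches order statistics,
    so W1 is the mean absolute difference of the sorted samples.
    """
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y))

# Two point masses each: mass at {0, 1} must travel to {2, 3}.
print(wasserstein1(np.array([0.0, 1.0]), np.array([2.0, 3.0])))  # 2.0
```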

A sample drawn from the distribution of all images of cyclists

I will not summarize WGANs better than the following handy sources, so let us read those instead.

Vincent Herrmann presents the Kantorovich-Rubinstein duality trick intuitively.
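A numerical sanity check of the Kantorovich-Rubinstein duality, under the simplifying assumption of a pure location shift, where both the primal (transport) and dual (1-Lipschitz witness) values are computable exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 1000)
y = x + 2.0                      # same distribution shifted by 2, so W1 = 2

# Primal: in 1-d the optimal transport plan matches sorted samples.
primal = np.mean(np.abs(np.sort(x) - np.sort(y)))

# Dual: any 1-Lipschitz f gives a lower bound E_p[f] - E_q[f] <= W1;
# for a pure rightward shift, f(t) = -t (slope 1) attains the bound.
dual = np.mean(-x) - np.mean(-y)

print(primal, dual)   # both equal 2.0 (up to float rounding)
```

The WGAN critic is a neural approximation of the dual witness f, with the Lipschitz constraint enforced by weight clipping (originally) or regularisation (later variants).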

What is the connection to other types of regularisation? Both gradient penalties (Gulrajani et al. 2017) and spectral normalisation (Miyato et al. 2018) amount to constraining the Lipschitz constant of the critic.
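Spectral normalisation, for instance, bounds each layer’s Lipschitz constant by dividing its weight matrix by its largest singular value, estimated cheaply by power iteration. A rough numpy sketch of that estimate (not Miyato et al.’s exact implementation, which carries the power-iteration vectors between training steps and normalises inside the network):

```python
import numpy as np

def spectral_norm(W, n_iter=50):
    """Largest singular value of W via power iteration.

    Spectral normalization uses this to rescale a layer's weights
    as W / spectral_norm(W), making the layer roughly 1-Lipschitz.
    """
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return u @ W @ v

W = np.array([[3.0, 0.0], [0.0, 1.0]])
print(spectral_norm(W))   # 3.0, the largest singular value
```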


Conditional GANs

How does this work? There are many papers exploring that; Mirza and Osindero (2014) and Isola et al. (2017) are two places to start.
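The basic mechanism in Mirza and Osindero (2014) is simple: feed the conditioning label to both networks, e.g. by concatenating a one-hot encoding of the class onto the generator’s latent input and onto the discriminator’s data input. A shapes-only sketch, with all dimensions invented for illustration:

```python
import numpy as np

def one_hot(y, n_classes):
    out = np.zeros((len(y), n_classes))
    out[np.arange(len(y)), y] = 1.0
    return out

rng = np.random.default_rng(0)
batch, latent_dim, n_classes = 4, 8, 10

# Conditional GAN: both networks see the label.
#   Generator input:     [z ; one_hot(y)]
#   Discriminator input: [x ; one_hot(y)]
z = rng.normal(size=(batch, latent_dim))
y = np.array([0, 3, 3, 7])
g_in = np.concatenate([z, one_hot(y, n_classes)], axis=1)
print(g_in.shape)   # (4, 18): 8 latent dims + 10 label dims
```

Later conditioning schemes (e.g. the projection discriminator of Miyato and Koyama 2018) are more elaborate, but the concatenation trick is the baseline.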


I think this requires a cycle-consistent loss, whatever that is (J.-Y. Zhu et al. 2017). How is it different from an autoencoder? I suppose because it maps between two data domains rather than between a latent space and a domain.
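A toy illustration of the cycle-consistency idea, using invertible linear maps as stand-ins for the two “networks” (an assumption for illustration; in CycleGAN both maps are learned, and the cycle loss is only driven towards zero rather than exactly attaining it):

```python
import numpy as np

# G maps domain X -> Y, F maps Y -> X. The cycle-consistency loss
# penalises ||F(G(x)) - x|| + ||G(F(y)) - y||; it is zero exactly
# when each map inverts the other.
A = np.array([[2.0, 0.0], [0.0, 0.5]])

def G(x):
    return x @ A.T

def F(y):
    return y @ np.linalg.inv(A).T

def cycle_loss(x, y):
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 2))
y = rng.normal(size=(16, 2))
print(cycle_loss(x, y))   # ~0, since F is exactly G's inverse here
```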

GANs as SDEs

Should look into this (L. Yang, Zhang, and Karniadakis 2020; Kidger et al. 2021).

GANs as VAEs

See deep generative models for a unifying framing.

GANs as energy-based models

Che et al. (2020) argue that a trained GAN discriminator secretly defines an energy-based model, and that sampling can be improved via discriminator-driven latent sampling.
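A sketch of that idea, with a made-up 1-d generator and discriminator logit standing in for trained networks: treat the discriminator’s realness score as a tilt on the latent prior and run MCMC in latent space, so samples are nudged towards regions the discriminator scores as realistic. This is a loose illustration of the principle, not Che et al.’s actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained pieces: real data sits near 3, but the
# generator is slightly off, centred at 2.5.
def g(z):
    return z + 2.5

def d_logit(x):
    return -(x - 3.0) ** 2   # higher = discriminator thinks "more real"

def energy(z):
    # Standard normal prior on z, tilted by the discriminator's score.
    return 0.5 * z ** 2 - d_logit(g(z))

# Random-walk Metropolis-Hastings in latent space.
z, samples = 0.0, []
for _ in range(20000):
    z_new = z + 0.5 * rng.normal()
    if np.log(rng.uniform()) < energy(z) - energy(z_new):
        z = z_new
    samples.append(g(z))

print(np.mean(samples))   # pulled from 2.5 towards 3
```

Naive sampling from the prior would give a mean of 2.5; the discriminator-tilted chain shifts it towards the real data, which is the whole point of the trick.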


Arjovsky, Martin, Soumith Chintala, and LΓ©on Bottou. 2017. β€œWasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.
Arora, Sanjeev, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. 2017. β€œGeneralization and Equilibrium in Generative Adversarial Nets (GANs).” arXiv:1703.00573 [Cs], March.
Bahadori, Mohammad Taha, Krzysztof Chalupka, Edward Choi, Robert Chen, Walter F. Stewart, and Jimeng Sun. 2017. β€œNeural Causal Regularization Under the Independence of Mechanisms Assumption.” arXiv:1702.02604 [Cs, Stat], February.
Bao, Gang, Xiaojing Ye, Yaohua Zang, and Haomin Zhou. 2020. β€œNumerical Solution of Inverse Problems by Weak Adversarial Networks.” Inverse Problems 36 (11): 115003.
Blaauw, Merlijn, and Jordi Bonada. 2017. β€œA Neural Parametric Singing Synthesizer.” arXiv:1704.03809 [Cs], April.
Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. β€œCompressed Sensing Using Generative Models.” In International Conference on Machine Learning, 537–46.
Bowman, Samuel R., Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. 2015. β€œGenerating Sentences from a Continuous Space.” arXiv:1511.06349 [Cs], November.
Che, Tong, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, and Yoshua Bengio. 2020. β€œYour GAN Is Secretly an Energy-Based Model and You Should Use Discriminator Driven Latent Sampling.” arXiv:2003.06060 [Cs, Stat], March.
Chen, Xi, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. β€œInfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, R. Garnett, and R. Garnett, 2172–80. Curran Associates, Inc.
Chu, Mengyu, Nils Thuerey, Hans-Peter Seidel, Christian Theobalt, and Rhaleb Zayer. 2021. β€œLearning Meaningful Controls for Fluids.” ACM Transactions on Graphics 40 (4): 1–13.
Denton, Emily, Soumith Chintala, Arthur Szlam, and Rob Fergus. 2015. β€œDeep Generative Image Models Using a Laplacian Pyramid of Adversarial Networks.” arXiv:1506.05751 [Cs], June.
Donahue, Chris, Julian McAuley, and Miller Puckette. 2019. β€œAdversarial Audio Synthesis.” In ICLR 2019.
Dosovitskiy, Alexey, Jost Tobias Springenberg, Maxim Tatarchenko, and Thomas Brox. 2014. β€œLearning to Generate Chairs, Tables and Cars with Convolutional Networks.” arXiv:1411.5928 [Cs], November.
Dziugaite, Gintare Karolina, Daniel M. Roy, and Zoubin Ghahramani. 2015. β€œTraining Generative Neural Networks via Maximum Mean Discrepancy Optimization.” In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, 258–67. UAI’15. Arlington, Virginia, United States: AUAI Press.
Engel, Jesse, Cinjon Resnick, Adam Roberts, Sander Dieleman, Douglas Eck, Karen Simonyan, and Mohammad Norouzi. 2017. β€œNeural Audio Synthesis of Musical Notes with WaveNet Autoencoders.” In PMLR.
Fraccaro, Marco, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. 2016. “Sequential Neural Models with Stochastic Layers.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 2199–2207. Curran Associates, Inc.
FrΓΌhstΓΌck, Anna, Ibraheem Alhashim, and Peter Wonka. 2019. β€œTileGAN: Synthesis of Large-Scale Non-Homogeneous Textures.” arXiv:1904.12795 [Cs], April.
Gal, Yarin, and Zoubin Ghahramani. 2015. β€œOn Modern Deep Learning and Variational Inference.” In Advances in Approximate Bayesian Inference Workshop, NIPS.
β€”β€”β€”. 2016. β€œBayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference.” In 4th International Conference on Learning Representations (ICLR) Workshop Track.
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2014. β€œExplaining and Harnessing Adversarial Examples.” arXiv:1412.6572 [Cs, Stat], December.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. β€œGenerative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc.
Gregor, Karol, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. 2015. β€œDRAW: A Recurrent Neural Network For Image Generation.” arXiv:1502.04623 [Cs], February.
Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. β€œImproved Training of Wasserstein GANs.” arXiv:1704.00028 [Cs, Stat], March.
He, Kun, Yan Wang, and John Hopcroft. 2016. β€œA Powerful Generative Model Using Random Weights for the Deep Image Representation.” In Advances in Neural Information Processing Systems.
Hinton, Geoffrey E. 2007. β€œLearning Multiple Layers of Representation.” Trends in Cognitive Sciences 11 (10): 428–34.
Husain, Hisham. 2020. β€œDistributional Robustness with IPMs and Links to Regularization and GANs.” arXiv:2006.04349 [Cs, Stat], June.
Husain, Hisham, Richard Nock, and Robert C. Williamson. 2019. β€œA Primal-Dual Link Between GANs and Autoencoders.” In Advances in Neural Information Processing Systems, 32:415–24.
Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. β€œImage-to-Image Translation with Conditional Adversarial Networks.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5967–76.
Jetchev, Nikolay, Urs Bergmann, and Roland Vollgraf. 2016. β€œTexture Synthesis with Spatial Generative Adversarial Networks.” In Advances in Neural Information Processing Systems 29.
Kidger, Patrick, James Foster, Xuechen Li, and Terry J. Lyons. 2021. β€œNeural SDEs as Infinite-Dimensional GANs.” In Proceedings of the 38th International Conference on Machine Learning, 5453–63. PMLR.
Kodali, Naveen, Jacob Abernethy, James Hays, and Zsolt Kira. 2017. β€œOn Convergence and Stability of GANs.” arXiv:1705.07215 [Cs], December.
Krishnan, Rahul G., Uri Shalit, and David Sontag. 2017. β€œStructured Inference Networks for Nonlinear State Space Models.” In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2101–9.
Kulkarni, Tejas D., Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. 2015. β€œDeep Convolutional Inverse Graphics Network.” arXiv:1503.03167 [Cs], March.
Lee, Honglak, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. 2009. β€œConvolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations.” In Proceedings of the 26th Annual International Conference on Machine Learning, 609–16. ICML ’09. New York, NY, USA: ACM.
Li, Chun-Liang, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabas Poczos. 2017. β€œMMD GAN: Towards Deeper Understanding of Moment Matching Network.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 2203–13. Curran Associates, Inc.
Louizos, Christos, and Max Welling. 2016. β€œStructured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors.” In arXiv Preprint arXiv:1603.04733, 1708–16.
Mirza, Mehdi, and Simon Osindero. 2014. β€œConditional Generative Adversarial Nets.” arXiv:1411.1784 [Cs, Stat], November.
Miyato, Takeru, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 2018. β€œSpectral Normalization for Generative Adversarial Networks.” In ICLR 2018.
Miyato, Takeru, and Masanori Koyama. 2018. “cGANs with Projection Discriminator.” In ICLR 2018.
Mnih, Andriy, and Karol Gregor. 2014. β€œNeural Variational Inference and Learning in Belief Networks.” In Proceedings of The 31st International Conference on Machine Learning.
Mohamed, A. r, G. E. Dahl, and G. Hinton. 2012. β€œAcoustic Modeling Using Deep Belief Networks.” IEEE Transactions on Audio, Speech, and Language Processing 20 (1): 14–22.
Oord, AΓ€ron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. β€œPixel Recurrent Neural Networks.” arXiv:1601.06759 [Cs], January.
Panaretos, Victor M., and Yoav Zemel. 2019. β€œStatistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application 6 (1): 405–31.
Pascual, Santiago, Joan SerrΓ , and Antonio Bonafonte. 2019. β€œTowards Generalized Speech Enhancement with Generative Adversarial Networks.” arXiv:1904.03418 [Cs, Eess], April.
Pfau, David, and Oriol Vinyals. 2016. β€œConnecting Generative Adversarial Networks and Actor-Critic Methods.” arXiv:1610.01945 [Cs, Stat], October.
Poole, Ben, Alexander A. Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. 2016. β€œImproved Generator Objectives for GANs.” In Advances in Neural Information Processing Systems 29.
Qin, Chongli, Yan Wu, Jost Tobias Springenberg, Andy Brock, Jeff Donahue, Timothy Lillicrap, and Pushmeet Kohli. 2020. β€œTraining Generative Adversarial Networks by Solving Ordinary Differential Equations.” In Advances in Neural Information Processing Systems. Vol. 33.
Radford, Alec, Luke Metz, and Soumith Chintala. 2015. β€œUnsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.” In arXiv:1511.06434 [Cs].
Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. 2015. β€œStochastic Backpropagation and Approximate Inference in Deep Generative Models.” In Proceedings of ICML.
Salakhutdinov, Ruslan. 2015. β€œLearning Deep Generative Models.” Annual Review of Statistics and Its Application 2 (1): 361–85.
Sun, Zheng, Jiaqi Liu, Zewang Zhang, Jingwen Chen, Zhao Huo, Ching Hua Lee, and Xiao Zhang. 2016. β€œComposing Music with Grammar Argumented Neural Networks and Note-Level Encoding.” arXiv:1611.05416 [Cs], November.
Sutherland, Dougal J., Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, and Arthur Gretton. 2017. β€œGenerative Models and Model Criticism via Optimized Maximum Mean Discrepancy.” In Proceedings of ICLR.
Theis, Lucas, and Matthias Bethge. 2015. β€œGenerative Image Modeling Using Spatial LSTMs.” arXiv:1506.03478 [Cs, Stat], June.
Tran, Dustin, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, and David M. Blei. 2017. β€œDeep Probabilistic Programming.” In ICLR.
Wang, Chuang, Hong Hu, and Yue M. Lu. 2019. β€œA Solvable High-Dimensional Model of GAN.” arXiv:1805.08349 [Cond-Mat, Stat], October.
Wu, Yan, Mihaela Rosca, and Timothy Lillicrap. 2019. β€œDeep Compressed Sensing.” In International Conference on Machine Learning, 6850–60.
Yang, Li-Chia, Szu-Yu Chou, and Yi-Hsuan Yang. 2017. β€œMidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation.” In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR’2017), Suzhou, China.
Yang, Liu, Dongkun Zhang, and George Em Karniadakis. 2020. β€œPhysics-Informed Generative Adversarial Networks for Stochastic Differential Equations.” SIAM Journal on Scientific Computing 42 (1): A292–317.
Zang, Yaohua, Gang Bao, Xiaojing Ye, and Haomin Zhou. 2020. β€œWeak Adversarial Networks for High-Dimensional Partial Differential Equations.” Journal of Computational Physics 411 (June): 109409.
Zeng, Qi, Spencer H. Bryngelson, and Florian SchΓ€fer. 2022. β€œCompetitive Physics Informed Networks.” arXiv.
Zhu, B., J. Jiao, and D. Tse. 2020. β€œDeconstructing Generative Adversarial Networks.” IEEE Transactions on Information Theory 66 (11): 7155–79.
Zhu, Jun-Yan, Philipp KrΓ€henbΓΌhl, Eli Shechtman, and Alexei A. Efros. 2016. β€œGenerative Visual Manipulation on the Natural Image Manifold.” In Proceedings of European Conference on Computer Vision.
Zhu, Jun-Yan, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. “Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks.” In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2223–32.
