Generative adversarial networks

October 7, 2016 — December 14, 2020

adversarial
AI
Bregman
game theory
generative
learning
likelihood free
Monte Carlo
optimization
probability
Figure 1: The critic providing a gradient update to the generator

Game theory meets learning. Hip, especially in combination with deep learning, because it provides an elegant means of likelihood-free inference.

I don’t know much about it yet. The gist: two systems are trained together, one generating examples of a phenomenon of interest and the other classifying examples as real or generated.
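Concretely, the original formulation of Goodfellow et al. (2014) is a two-player minimax game:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))],$$

where the generator $G$ turns noise $z$ into samples and the discriminator $D$ tries to tell those samples from real data. Nothing here requires evaluating a likelihood, only sampling from $G$, which is where the likelihood-free appeal comes from.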

Sanjeev Arora gives a cogent intro. He also suggests a link with learning theory. See also Delving deep into Generative Adversarial Networks, a “curated, quasi-exhaustive list of state-of-the-art publications and resources about Generative Adversarial Networks (GANs) and their applications.”

GANs are famous for generating images, but I am interested in their use in simulating from difficult distributions in general.

Try a spreadsheet interface for exploring GAN latent spaces. See also The GAN Zoo, “A list of all named GANs!”

To discover: the precise relationship of deep GANs with, e.g., adversarial training in games and bandit problems. Also, why not, let us consider Augustus Odena’s Open Questions about GANs.


1 Wasserstein GAN

A tasty hack. The Wasserstein GAN paper (Arjovsky, Chintala, and Bottou 2017) made a splash. The argument is that, kinda-sorta if we squint at it, we can understand the GAN as solving an inference problem with respect to Wasserstein loss. The argument has since been made more precise and extended, but for all its flaws the original article has IMO a good insight and a clear explanation of it.

Figure 3: A sample drawn from the distribution of all images of cyclists

I will not summarize WGANs better than the following handy sources, so let us read those instead.

Vincent Herrmann presents the Kantorovich-Rubinstein duality trick intuitively.
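For reference, the duality in question rewrites the Wasserstein-1 distance between the real distribution $p_r$ and the generator's distribution $p_g$ as a supremum over 1-Lipschitz critics:

$$W_1(p_r, p_g) = \sup_{\|f\|_{L} \le 1} \mathbb{E}_{x \sim p_r}[f(x)] - \mathbb{E}_{x \sim p_g}[f(x)],$$

which is what licenses training a Lipschitz-constrained critic network in place of solving an optimal transport problem.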

Is there a connection to other types of regularisation, such as the gradient penalty and spectral normalisation? (Gulrajani et al. 2017; Miyato et al. 2018)
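For concreteness, here is a minimal PyTorch sketch (my own, not the papers’ reference code) of the gradient penalty of Gulrajani et al. (2017), assuming image-shaped batches and a scalar-output critic:

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP penalty: push the critic's input-gradient norm towards 1
    on random interpolates between real and generated batches."""
    fake = fake.detach()  # penalize w.r.t. inputs, not generator params
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradient of the critic's scores with respect to its inputs.
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    # Soft version of the 1-Lipschitz constraint from the duality above.
    return lam * ((grad_norm - 1) ** 2).mean()
```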

2 Conditional

How does this work? There are many papers exploring that. How about these two? Mirza and Osindero (2014); Isola et al. (2017)
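The core move in Mirza and Osindero (2014) is simply to feed the conditioning variable to both networks, so the generator learns a distribution per condition. A toy sketch of the generator side (the dimensions and architecture here are arbitrary placeholders of mine):

```python
import torch
from torch import nn

class ConditionalGenerator(nn.Module):
    """Noise z plus a label embedding in, sample out: each label y
    induces its own conditional distribution over generated outputs."""
    def __init__(self, z_dim=64, n_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, y):
        # The discriminator is conditioned the same way: D(x, y).
        return self.net(torch.cat([z, self.embed(y)], dim=1))

g = ConditionalGenerator()
samples = g(torch.randn(8, 64), torch.randint(0, 10, (8,)))
print(samples.shape)  # torch.Size([8, 784])
```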

3 Invertible

I think this requires a cycle-consistency loss (J.-Y. Zhu et al. 2017): the requirement that translating into the other domain and back again should approximately recover the input, as sketched below. How does it differ from an autoencoder? I suppose because it maps between two domains rather than between a latent space and a domain.
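As I understand Zhu et al. (2017), the cycle-consistency term is just an L1 round-trip penalty between the two learned translators; a minimal sketch:

```python
import torch
from torch import nn

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """G maps domain X -> Y and F maps Y -> X; a round trip should
    return (approximately) the input: F(G(x)) ~ x and G(F(y)) ~ y."""
    return lam * ((F(G(x)) - x).abs().mean() + (G(F(y)) - y).abs().mean())

# Smoke test with identity "translators": the loss is exactly zero.
x, y = torch.randn(4, 3), torch.randn(4, 3)
print(cycle_consistency_loss(nn.Identity(), nn.Identity(), x, y))
```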


4 Spectral normalization

Miyato and Koyama (2018); Miyato et al. (2018)

pfnet-research/sngan_projection: GANs with spectral normalization and projection discriminator
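PyTorch ships this as a utility, so a usage sketch is nearly one line: wrapping a layer in `torch.nn.utils.spectral_norm` rescales its weight by an estimate of the largest singular value (maintained by power iteration), which is how Miyato et al. (2018) keep each discriminator layer approximately 1-Lipschitz.

```python
import torch
from torch import nn
from torch.nn.utils import spectral_norm

# The wrapped layer re-normalizes its weight on every forward pass,
# bounding the layer's spectral norm (and hence Lipschitz constant) by ~1.
layer = spectral_norm(nn.Conv2d(3, 64, kernel_size=3, padding=1))
x = torch.randn(8, 3, 32, 32)
print(layer(x).shape)  # torch.Size([8, 64, 32, 32])
```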

5 GANs as SDEs

Should look into this (L. Yang, Zhang, and Karniadakis 2020; Kidger et al. 2021).

6 GANs as VAEs

See deep generative models for a unifying framing.

7 GANs as energy-based models

Che et al. (2020) argue that the discriminator of a trained GAN implicitly defines an energy-based model over samples, and that we should exploit it at sampling time by running MCMC in the generator’s latent space (“discriminator driven latent sampling”).
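As a rough illustration (my own sketch, with illustrative step sizes, not the authors’ code), the recipe amounts to Langevin dynamics in latent space on an energy that tilts the Gaussian prior by the discriminator’s logit:

```python
import torch

def ddls_sample(G, D, z, n_steps=50, step=0.01):
    """Langevin dynamics in latent space on the (assumed) energy
    E(z) = ||z||^2 / 2 - logit(D(G(z))): the Gaussian prior tilted by
    the discriminator's assessment of the generated sample."""
    z = z.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = 0.5 * (z ** 2).sum() - D(G(z)).sum()
        grad, = torch.autograd.grad(energy, z)
        with torch.no_grad():
            z = z - 0.5 * step * grad + (step ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    return z.detach()
```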

8 Incoming

9 References

Arjovsky, Chintala, and Bottou. 2017. “Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning.
Arora, Ge, Liang, et al. 2017. “Generalization and Equilibrium in Generative Adversarial Nets (GANs).” arXiv:1703.00573 [Cs].
Bahadori, Chalupka, Choi, et al. 2017. “Neural Causal Regularization Under the Independence of Mechanisms Assumption.” arXiv:1702.02604 [Cs, Stat].
Bao, Ye, Zang, et al. 2020. “Numerical Solution of Inverse Problems by Weak Adversarial Networks.” Inverse Problems.
Blaauw, and Bonada. 2017. “A Neural Parametric Singing Synthesizer.” arXiv:1704.03809 [Cs].
Bora, Jalal, Price, et al. 2017. “Compressed Sensing Using Generative Models.” In International Conference on Machine Learning.
Bowman, Vilnis, Vinyals, et al. 2015. “Generating Sentences from a Continuous Space.” arXiv:1511.06349 [Cs].
Che, Li, Jacob, et al. 2017. “Mode Regularized Generative Adversarial Networks.”
Che, Zhang, Sohl-Dickstein, et al. 2020. “Your GAN Is Secretly an Energy-Based Model and You Should Use Discriminator Driven Latent Sampling.” arXiv:2003.06060 [Cs, Stat].
Chen, Duan, Houthooft, et al. 2016. “InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 29.
Chu, Thuerey, Seidel, et al. 2021. “Learning Meaningful Controls for Fluids.” ACM Transactions on Graphics.
Denton, Chintala, Szlam, et al. 2015. “Deep Generative Image Models Using a Laplacian Pyramid of Adversarial Networks.” arXiv:1506.05751 [Cs].
Donahue, McAuley, and Puckette. 2019. “Adversarial Audio Synthesis.” In ICLR 2019.
Dosovitskiy, Springenberg, Tatarchenko, et al. 2014. “Learning to Generate Chairs, Tables and Cars with Convolutional Networks.” arXiv:1411.5928 [Cs].
Dziugaite, Roy, and Ghahramani. 2015. “Training Generative Neural Networks via Maximum Mean Discrepancy Optimization.” In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence. UAI’15.
Engel, Resnick, Roberts, et al. 2017. “Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders.” In PMLR.
Fraccaro, Sønderby, Paquet, et al. 2016. “Sequential Neural Models with Stochastic Layers.” In Advances in Neural Information Processing Systems 29.
Frühstück, Alhashim, and Wonka. 2019. “TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures.” arXiv:1904.12795 [Cs].
Gal, and Ghahramani. 2015. “On Modern Deep Learning and Variational Inference.” In Advances in Approximate Bayesian Inference Workshop, NIPS.
———. 2016. “Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference.” In 4th International Conference on Learning Representations (ICLR) Workshop Track.
Goodfellow, Ian, Pouget-Abadie, Mirza, et al. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27. NIPS’14.
Goodfellow, Ian J., Shlens, and Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” arXiv:1412.6572 [Cs, Stat].
Gregor, Danihelka, Graves, et al. 2015. “DRAW: A Recurrent Neural Network For Image Generation.” arXiv:1502.04623 [Cs].
Gulrajani, Ahmed, Arjovsky, et al. 2017. “Improved Training of Wasserstein GANs.” arXiv:1704.00028 [Cs, Stat].
He, Wang, and Hopcroft. 2016. “A Powerful Generative Model Using Random Weights for the Deep Image Representation.” In Advances in Neural Information Processing Systems.
Hinton. 2007. “Learning Multiple Layers of Representation.” Trends in Cognitive Sciences.
Husain. 2020. “Distributional Robustness with IPMs and Links to Regularization and GANs.” arXiv:2006.04349 [Cs, Stat].
Husain, Nock, and Williamson. 2019. “A Primal-Dual Link Between GANs and Autoencoders.” In Advances in Neural Information Processing Systems.
Isola, Zhu, Zhou, et al. 2017. “Image-to-Image Translation with Conditional Adversarial Networks.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Jetchev, Bergmann, and Vollgraf. 2016. “Texture Synthesis with Spatial Generative Adversarial Networks.” In Advances in Neural Information Processing Systems 29.
Kidger, Foster, Li, et al. 2021. “Neural SDEs as Infinite-Dimensional GANs.” In Proceedings of the 38th International Conference on Machine Learning.
Kodali, Abernethy, Hays, et al. 2017. “On Convergence and Stability of GANs.” arXiv:1705.07215 [Cs].
Krishnan, Shalit, and Sontag. 2017. “Structured Inference Networks for Nonlinear State Space Models.” In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence.
Kulkarni, Whitney, Kohli, et al. 2015. “Deep Convolutional Inverse Graphics Network.” arXiv:1503.03167 [Cs].
Lee, Grosse, Ranganath, et al. 2009. “Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations.” In Proceedings of the 26th Annual International Conference on Machine Learning. ICML ’09.
Li, Chang, Cheng, et al. 2017. “MMD GAN: Towards Deeper Understanding of Moment Matching Network.” In Advances in Neural Information Processing Systems 30.
Louizos, and Welling. 2016. “Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors.” In arXiv Preprint arXiv:1603.04733.
Mirza, and Osindero. 2014. “Conditional Generative Adversarial Nets.” arXiv:1411.1784 [Cs, Stat].
Miyato, Kataoka, Koyama, et al. 2018. “Spectral Normalization for Generative Adversarial Networks.” In ICLR 2018.
Miyato, and Koyama. 2018. “cGANs with Projection Discriminator.” In ICLR 2018.
Mnih, and Gregor. 2014. “Neural Variational Inference and Learning in Belief Networks.” In Proceedings of The 31st International Conference on Machine Learning.
Mohamed, Dahl, and Hinton. 2012. “Acoustic Modeling Using Deep Belief Networks.” IEEE Transactions on Audio, Speech, and Language Processing.
Panaretos, and Zemel. 2019. “Statistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application.
Pascual, Serrà, and Bonafonte. 2019. “Towards Generalized Speech Enhancement with Generative Adversarial Networks.” arXiv:1904.03418 [Cs, Eess].
Pfau, and Vinyals. 2016. “Connecting Generative Adversarial Networks and Actor-Critic Methods.” arXiv:1610.01945 [Cs, Stat].
Poole, Alemi, Sohl-Dickstein, et al. 2016. “Improved Generator Objectives for GANs.” In Advances in Neural Information Processing Systems 29.
Qin, Wu, Springenberg, et al. 2020. “Training Generative Adversarial Networks by Solving Ordinary Differential Equations.” In Advances in Neural Information Processing Systems.
Radford, Metz, and Chintala. 2015. “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.” In arXiv:1511.06434 [Cs].
Rezende, Mohamed, and Wierstra. 2015. “Stochastic Backpropagation and Approximate Inference in Deep Generative Models.” In Proceedings of ICML.
Salakhutdinov. 2015. “Learning Deep Generative Models.” Annual Review of Statistics and Its Application.
Salimans, Goodfellow, Zaremba, et al. 2016. “Improved Techniques for Training GANs.” In Proceedings of the 30th International Conference on Neural Information Processing Systems. NIPS’16.
Sun, Liu, Zhang, et al. 2016. “Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding.” arXiv:1611.05416 [Cs].
Sutherland, Tung, Strathmann, et al. 2017. “Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy.” In Proceedings of ICLR.
Theis, and Bethge. 2015. “Generative Image Modeling Using Spatial LSTMs.” arXiv:1506.03478 [Cs, Stat].
Tran, Hoffman, Saurous, et al. 2017. “Deep Probabilistic Programming.” In ICLR.
van den Oord, Kalchbrenner, and Kavukcuoglu. 2016. “Pixel Recurrent Neural Networks.” arXiv:1601.06759 [Cs].
Wang, Hu, and Lu. 2019. “A Solvable High-Dimensional Model of GAN.” arXiv:1805.08349 [Cond-Mat, Stat].
Wu, Rosca, and Lillicrap. 2019. “Deep Compressed Sensing.” In International Conference on Machine Learning.
Yang, Li-Chia, Chou, and Yang. 2017. “MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation.” In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR’2017), Suzhou, China.
Yang, Liu, Zhang, and Karniadakis. 2020. “Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations.” SIAM Journal on Scientific Computing.
Zang, Bao, Ye, et al. 2020. “Weak Adversarial Networks for High-Dimensional Partial Differential Equations.” Journal of Computational Physics.
Zeng, Bryngelson, and Schäfer. 2022. “Competitive Physics Informed Networks.”
Zhu, B., Jiao, and Tse. 2020. “Deconstructing Generative Adversarial Networks.” IEEE Transactions on Information Theory.
Zhu, Jun-Yan, Krähenbühl, Shechtman, et al. 2016. “Generative Visual Manipulation on the Natural Image Manifold.” In Proceedings of European Conference on Computer Vision.
Zhu, Jun-Yan, Park, Isola, et al. 2017. “Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks.” In 2017 IEEE International Conference on Computer Vision (ICCV).