Deep generative models



Generating a synthetic observation at great depth

Certain famous models in neural nets are generative: informally, they produce samples from some distribution, and training tweaks that distribution until it resembles, in some sense, the distribution of our observed data. There are many attempts now to unify fancy generative techniques such as GANs, VAEs and neural diffusions into a single method, or at least a cordial family of methods, so I had better devise a page for that.

Lilian Weng diagrams some popular generative architectures.
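To make that recipe concrete, here is a minimal toy sketch of its adversarial incarnation, assuming PyTorch and a 1-d shifted Gaussian standing in for "our observed data" (all the architecture sizes are arbitrary illustrations): a generator turns noise into samples, and training nudges the sample distribution toward the data distribution via a discriminator's feedback.

```python
import torch
import torch.nn as nn

# Toy "observed data": a shifted, scaled Gaussian.
def data_batch(n=128):
    return 2.0 + 0.5 * torch.randn(n, 1)

# Generator maps noise to samples; discriminator scores realism.
G = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real, fake = data_batch(), G(torch.randn(128, 2))
    # Discriminator step: distinguish data samples from generated ones.
    loss_d = bce(D(real), torch.ones(128, 1)) \
        + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: tweak the sample distribution so D finds it data-like.
    loss_g = bce(D(G(torch.randn(128, 2))), torch.ones(128, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

Nothing in this loop ever evaluates a density of the generator's samples; the discriminator is the only measure of how well the two distributions match.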

Here I mean generative in the sense that "this model will (approximately) simulate from the true distribution of interest", which is somewhat weaker than the requirements of, e.g., Monte Carlo Bayesian inference, where we assume that we can access likelihoods, or at least likelihood gradients. Here we might have no likelihood at all, or only variational approximations to it, or whatever.
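A minimal sketch of that distinction, with illustrative class names rather than any real library's API: the implicit model can sample, but exposes no density, so likelihood-hungry machinery has nothing to chew on.

```python
import torch

class ImplicitModel:
    """Can simulate x ~ p_theta, but p_theta(x) itself is unavailable."""
    def __init__(self, generator):
        self.generator = generator

    def sample(self, n):
        return self.generator(torch.randn(n, 2))
    # Note: no log_prob() method. Likelihood-based inference
    # (MCMC, maximum likelihood, ...) cannot use this object.

class LikelihoodModel:
    """By contrast, a prescribed model: sampling *and* log-density."""
    def __init__(self, loc, scale):
        self.dist = torch.distributions.Normal(loc, scale)

    def sample(self, n):
        return self.dist.sample((n,))

    def log_prob(self, x):
        return self.dist.log_prob(x)
```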

Observations arising from unobserved latent factors

Philosophical diversion: probability is a weird abstraction

Tangent: Learning problems involve composing differentiation and integration of various terms that measure how well you have approximated the state of the world. Probabilistic neural networks leverage combinations of integrals that we can solve by Monte Carlo, and derivatives that we can solve via automatic differentiation, both of which are fast-ish on modern hardware. In cunning combination these find approximate solutions to some very interesting problems in calculus.

Although… there is something odd about that setup. From this perspective the generative models (such as GANs and autoencoders) solve an intractable integral by simulating samples from it, in lieu of processing the continuous, unknowable, intractable integral that we actually wish to solve. But that continuous intractable integral was in any case a contrivance, a thought experiment imagining a world populated with such weird Platonic objects as integrals-over-possible-states-of-the-world, which only mathematicians would consider reasonable. The world we live in has, as far as I know, no such thing. We do not live in a world where the things we observe are stochastic samples from an ineffable probability density; rather, the observations themselves are the phenomena, and the probability density over them is a weird abstraction. It must look deeply odd from the outside when we talk about how we are solving integrals by looking at data, instead of solving data by looking at integrals.
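The "cunning combination" in miniature, assuming PyTorch: we estimate the gradient with respect to $\mu$ of the integral $\mathbb{E}_{z\sim\mathcal{N}(\mu,1)}[f(z)]$ by Monte Carlo for the integration and automatic differentiation for the derivative, via the reparameterization trick. A toy sketch, not any particular paper's method.

```python
import torch

mu = torch.tensor(0.5, requires_grad=True)

def f(z):
    # Any differentiable function of the latent variable will do.
    return torch.sin(z) ** 2

# Reparameterize z ~ N(mu, 1) as z = mu + eps with eps ~ N(0, 1),
# so the Monte Carlo estimate of the integral is differentiable in mu.
eps = torch.randn(10_000)
estimate = f(mu + eps).mean()   # Monte Carlo solves the integral
estimate.backward()             # autodiff solves the derivative
print(estimate.item(), mu.grad.item())
```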

Generative flow nets

See this page.

