Generative art with language+diffusion models



Generative art using DALL·E 2, Stable Diffusion, Midjourney and friends, which combine diffusion models with transformer-based text encoders to turn prompts into images.
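The diffusion half of these systems rests on the closed-form forward (noising) process of Ho et al. (2020), in which an image can be jumped to any noise level in one step. A minimal numpy sketch of that forward process, with an illustrative linear schedule and an 8×8 array standing in for an image (all values here are for illustration, not from any particular implementation):

```python
import numpy as np

# Toy illustration of the DDPM forward (noising) process
# q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I)
# from Ho et al. (2020). Schedule and shapes are illustrative.

rng = np.random.default_rng(0)

T = 1000                             # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # \bar{alpha}_t, shrinks towards 0

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form, no iteration needed."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

x0 = rng.standard_normal((8, 8))     # stand-in for an image
x_mid = q_sample(x0, t=100)          # partially noised
x_end = q_sample(x0, t=T - 1)        # nearly pure noise: alpha_bar ~ 0
```

The generative model is then trained to run this process in reverse, predicting the noise `eps` at each step; sampling an image means starting from `x_end`-like pure noise and denoising step by step.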

CLIP presumably goes here: it learns a joint image–text embedding that many of these systems use to steer generation towards a text prompt.
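The mechanics of CLIP-style scoring are simple even though the encoders are not: map image and text into a shared space, normalize, take a cosine similarity. A toy sketch, where the "encoders" are random projections purely for illustration (real CLIP uses trained ViT/transformer encoders):

```python
import numpy as np

# Toy sketch of CLIP-style scoring: image and text are projected into a
# shared embedding space, normalized, and compared by cosine similarity.
# W_img / W_txt are random stand-ins for the trained encoders.

rng = np.random.default_rng(0)
DIM = 64                                  # shared embedding dimension

W_img = rng.standard_normal((DIM, 256))   # stand-in image encoder
W_txt = rng.standard_normal((DIM, 128))   # stand-in text encoder

def embed(W, x):
    z = W @ x
    return z / np.linalg.norm(z)          # CLIP normalizes embeddings

def clip_score(img_feats, txt_feats):
    """Cosine similarity in the shared space; higher = better match."""
    return float(embed(W_img, img_feats) @ embed(W_txt, txt_feats))

img = rng.standard_normal(256)            # stand-in image features
txt = rng.standard_normal(128)            # stand-in text features
s = clip_score(img, txt)                  # a value in [-1, 1]
```

In guided generation this similarity (or its gradient with respect to the image) is what nudges the diffusion sampler towards images that match the prompt.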

References

Dhariwal, Prafulla, and Alex Nichol. 2021. β€œDiffusion Models Beat GANs on Image Synthesis.” arXiv:2105.05233 [Cs, Stat], June.
Dutordoir, Vincent, Alan Saul, Zoubin Ghahramani, and Fergus Simpson. 2022. β€œNeural Diffusion Processes.” arXiv.
Han, Xizewen, Huangjie Zheng, and Mingyuan Zhou. 2022. β€œCARD: Classification and Regression Diffusion Models.” arXiv.
Ho, Jonathan, Ajay Jain, and Pieter Abbeel. 2020. β€œDenoising Diffusion Probabilistic Models.” arXiv:2006.11239 [Cs, Stat], December.
Hoogeboom, Emiel, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. 2021. β€œAutoregressive Diffusion Models.” arXiv:2110.02037 [Cs, Stat], October.
Nichol, Alex, and Prafulla Dhariwal. 2021. β€œImproved Denoising Diffusion Probabilistic Models.” arXiv:2102.09672 [Cs, Stat], February.
Sohl-Dickstein, Jascha, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. β€œDeep Unsupervised Learning Using Nonequilibrium Thermodynamics.” arXiv:1503.03585 [Cond-Mat, q-Bio, Stat], November.
Song, Jiaming, Chenlin Meng, and Stefano Ermon. 2021. β€œDenoising Diffusion Implicit Models.” arXiv:2010.02502 [Cs], November.
Song, Yang, and Stefano Ermon. 2020a. β€œGenerative Modeling by Estimating Gradients of the Data Distribution.” In Advances In Neural Information Processing Systems. arXiv.
β€”β€”β€”. 2020b. β€œImproved Techniques for Training Score-Based Generative Models.” In Advances In Neural Information Processing Systems. arXiv.
Yang, Ling, Zhilong Zhang, Shenda Hong, Runsheng Xu, Yue Zhao, Yingxia Shao, Wentao Zhang, Ming-Hsuan Yang, and Bin Cui. 2022. β€œDiffusion Models: A Comprehensive Survey of Methods and Applications.” arXiv.
