Deep generative models



Generating a synthetic observation at great depth

Certain famous models in neural nets are generative — informally, they produce samples from some distribution, and that distribution is tweaked until it resembles, in some sense, the distribution of our observed data. There are many attempts now to unify fancy generative techniques such as GANs and VAEs into a single unified method, or at least a cordial family of methods, so I had better devote a page to that.
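To make the "tweak the sampler until its samples resemble the data" idea concrete, here is a minimal sketch. The "generator" is deliberately trivial (an affine map of Gaussian noise, fitted by matching moments) rather than a neural net trained adversarially — the assumption is only that the training signal comes from comparing samples to samples, which is the part all these methods share.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Data" from a distribution we only see through samples.
data = rng.normal(loc=3.0, scale=0.5, size=2000)

# A trivially simple generator: push standard noise through an affine
# map g(z) = mu + sigma * z. Deep generative models replace this map
# with a neural network.
mu, log_sigma = 0.0, 0.0

# Tweak the generator until its samples resemble the data, here by
# gradient descent on a crude sample-based discrepancy (moment matching).
for step in range(500):
    z = rng.normal(size=256)
    fake = mu + np.exp(log_sigma) * z
    mean_gap = fake.mean() - data.mean()
    std_gap = fake.std() - data.std()
    # Gradients of (mean gap)^2 + (std gap)^2 w.r.t. the parameters;
    # note d(std)/d(log_sigma) = std itself.
    mu -= 0.1 * (2 * mean_gap)
    log_sigma -= 0.1 * (2 * std_gap * fake.std())

samples = mu + np.exp(log_sigma) * rng.normal(size=2000)
# samples.mean() ends up near 3.0 and samples.std() near 0.5.
```

GANs, MMD nets and friends differ mainly in which sample-based discrepancy replaces the moment gaps above.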

Here I mean generative in the sense that “this model will (approximately) simulate from the true distribution of interest”, which is somewhat weaker than the requirements of, e.g., MC Bayesian inference, where we assume that we can access likelihoods, or at least likelihood gradients. Here we might have no likelihood at all, or only variational approximations to the likelihood, or whatever.
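The likelihood-free regime is easy to demonstrate: a sketch of a hypothetical implicit model, where drawing samples is trivial but no tractable density exists to hand to a likelihood-based sampler. The particular nonlinear map here is an arbitrary illustration, not any standard model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n):
    # An "implicit" generative model: we can draw samples by pushing
    # noise through a nonlinear map, but we never write down (and in
    # general cannot tractably compute) the density of the output.
    z = rng.normal(size=(n, 2))
    x = np.tanh(z @ np.array([[1.0, 0.5], [-0.5, 1.0]])) + 0.1 * z**2
    return x[:, 0] * x[:, 1]

samples = simulate(10_000)

# Any expectation under the model is a Monte Carlo average away...
estimate = samples.mean()
# ...but there is no logpdf(x) for an MCMC sampler to evaluate:
# that is what makes this weaker than likelihood-based inference.
```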

Observations arising from unobserved latent factors

Tangent: Learning problems involve composing differentiation and integration of various terms that measure how well you have approximated the state of the world. Probabilistic neural networks leverage combinations of integrals that we can solve by Monte Carlo, and derivatives that we can solve via automatic differentiation, both of which are fast-ish on modern hardware. In cunning combination these find approximate solutions to some very interesting problems in calculus.

Although… there is something odd about that setup. From this perspective, generative models (such as GANs and autoencoders) solve an intractable integral by simulating samples from it, in lieu of processing the continuous, unknowable, intractable integral that we actually wish to solve. But that continuous intractable integral was in any case a contrivance, a thought experiment imagining a world populated with such weird Platonic objects as integrals-over-possible-states-of-the-world, which only mathematicians would consider reasonable. The world we live in has, as far as I know, no such thing. We do not have a world where the things we observe are stochastic samples from an ineffable probability density; rather, the observations themselves are the phenomena, and the probability density over them is a weird abstraction. It must look deeply odd from the outside when we talk about how we are solving integrals by looking at data, instead of solving data by looking at integrals.
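The Monte-Carlo-plus-autodiff combination can be shown in a few lines. A toy integral with a known closed form stands in for the intractable ones; the "differentiate inside the expectation" step is the reparameterization trick written out by hand, which automatic differentiation performs mechanically for deep models.

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy "intractable" integral: I(theta) = E_z[(theta + z)^2] with
# z ~ N(0, 1). Closed form: theta^2 + 1, so dI/dtheta = 2 * theta.
theta = 1.5
z = rng.normal(size=100_000)

# Monte Carlo solves the integral...
integral_mc = np.mean((theta + z) ** 2)

# ...and differentiating under the expectation (the reparameterization
# trick, done by hand here) solves its gradient with the same samples:
# d/dtheta (theta + z)^2 = 2 * (theta + z).
grad_mc = np.mean(2 * (theta + z))

# integral_mc is close to theta^2 + 1 = 3.25,
# grad_mc is close to 2 * theta = 3.0.
```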

References

Adler, Jonas, and Sebastian Lunz. 2018. “Banach Wasserstein GAN,” June. https://arxiv.org/abs/1806.06621v2.
Arjovsky, Martin, and Léon Bottou. 2017. “Towards Principled Methods for Training Generative Adversarial Networks.” January 17, 2017. http://arxiv.org/abs/1701.04862.
Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. “Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23. http://proceedings.mlr.press/v70/arjovsky17a.html.
Arora, Sanjeev, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. 2017. “Generalization and Equilibrium in Generative Adversarial Nets (GANs).” March 1, 2017. http://arxiv.org/abs/1703.00573.
Arora, Sanjeev, Yingyu Liang, and Tengyu Ma. 2015. “Why Are Deep Nets Reversible: A Simple Theory, with Implications for Training.” November 17, 2015. http://arxiv.org/abs/1511.05653.
Bach, Stephen H., Bryan He, Alexander Ratner, and Christopher Ré. 2017. “Learning the Structure of Generative Models Without Labeled Data.” In Proceedings of the 34th International Conference on Machine Learning. International Conference on Machine Learning, Sydney, Australia. http://arxiv.org/abs/1703.00854.
Bahadori, Mohammad Taha, Krzysztof Chalupka, Edward Choi, Robert Chen, Walter F. Stewart, and Jimeng Sun. 2017. “Neural Causal Regularization Under the Independence of Mechanisms Assumption.” February 8, 2017. http://arxiv.org/abs/1702.02604.
Baydin, Atılım Güneş, Lei Shao, Wahid Bhimji, Lukas Heinrich, Lawrence Meadows, Jialin Liu, Andreas Munk, et al. 2019. “Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale.” In. http://arxiv.org/abs/1907.03382.
Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. “Compressed Sensing Using Generative Models.” In International Conference on Machine Learning, 537–46. http://arxiv.org/abs/1703.03208.
Bowman, Samuel R., Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. 2015. “Generating Sentences from a Continuous Space.” November 19, 2015. http://arxiv.org/abs/1511.06349.
Burda, Yuri, Roger Grosse, and Ruslan Salakhutdinov. 2016. “Importance Weighted Autoencoders.” In. http://arxiv.org/abs/1509.00519.
Caterini, Anthony L., Arnaud Doucet, and Dino Sejdinovic. 2018. “Hamiltonian Variational Auto-Encoder.” In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1805.11328.
Chen, Xi, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. “InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 2172–80. Curran Associates, Inc. http://papers.nips.cc/paper/6399-infogan-interpretable-representation-learning-by-information-maximizing-generative-adversarial-nets.pdf.
Chen, Xi, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. “Variational Lossy Autoencoder.” In Proceedings of ICLR. http://arxiv.org/abs/1611.02731.
Dasgupta, Sakyasingha, Takayuki Yoshizumi, and Takayuki Osogami. 2016. “Regularized Dynamic Boltzmann Machine with Delay Pruning for Unsupervised Learning of Temporal Sequences.” September 22, 2016. http://arxiv.org/abs/1610.01989.
Denton, Emily, Soumith Chintala, Arthur Szlam, and Rob Fergus. 2015. “Deep Generative Image Models Using a Laplacian Pyramid of Adversarial Networks.” June 18, 2015. http://arxiv.org/abs/1506.05751.
Donahue, Chris, Julian McAuley, and Miller Puckette. 2019. “Adversarial Audio Synthesis.” In ICLR 2019. http://arxiv.org/abs/1802.04208.
Dosovitskiy, Alexey, Jost Tobias Springenberg, Maxim Tatarchenko, and Thomas Brox. 2014. “Learning to Generate Chairs, Tables and Cars with Convolutional Networks.” November 21, 2014. http://arxiv.org/abs/1411.5928.
Dutordoir, Vincent, James Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, and Nicolas Durrande. 2021. “Deep Neural Networks as Point Estimates for Deep Gaussian Processes.” May 10, 2021. http://arxiv.org/abs/2105.04504.
Dziugaite, Gintare Karolina, Daniel M. Roy, and Zoubin Ghahramani. 2015. “Training Generative Neural Networks via Maximum Mean Discrepancy Optimization.” In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, 258–67. UAI’15. Arlington, Virginia, United States: AUAI Press. http://arxiv.org/abs/1505.03906.
Engel, Jesse, Kumar Krishna Agrawal, Shuo Chen, Ishaan Gulrajani, Chris Donahue, and Adam Roberts. 2019. “GANSynth: Adversarial Neural Audio Synthesis.” In Seventh International Conference on Learning Representations. http://arxiv.org/abs/1902.08710.
Engel, Jesse, Lamtharn (Hanoi) Hantrakul, Chenjie Gu, and Adam Roberts. 2019. “DDSP: Differentiable Digital Signal Processing.” In. https://openreview.net/forum?id=B1x1ma4tDr.
Engel, Jesse, Cinjon Resnick, Adam Roberts, Sander Dieleman, Douglas Eck, Karen Simonyan, and Mohammad Norouzi. 2017. “Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders.” In PMLR. http://arxiv.org/abs/1704.01279.
Frühstück, Anna, Ibraheem Alhashim, and Peter Wonka. 2019. “TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures.” April 29, 2019. https://doi.org/10.1145/3306346.3322993.
Gal, Yarin, and Zoubin Ghahramani. 2015. “On Modern Deep Learning and Variational Inference.” In Advances in Approximate Bayesian Inference Workshop, NIPS.
Genevay, Aude, Gabriel Peyré, and Marco Cuturi. 2017. “Learning Generative Models with Sinkhorn Divergences.” October 20, 2017. http://arxiv.org/abs/1706.00292.
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” December 19, 2014. http://arxiv.org/abs/1412.6572.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc. http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.
Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. “Improved Training of Wasserstein GANs.” March 31, 2017. http://arxiv.org/abs/1704.00028.
Guo, Xin, Johnny Hong, Tianyi Lin, and Nan Yang. 2017. “Relaxed Wasserstein with Applications to GANs.” May 19, 2017. http://arxiv.org/abs/1705.07164.
He, Kun, Yan Wang, and John Hopcroft. 2016. “A Powerful Generative Model Using Random Weights for the Deep Image Representation.” In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1606.04801.
Hinton, Geoffrey E. 2007. “Learning Multiple Layers of Representation.” Trends in Cognitive Sciences 11 (10): 428–34. https://doi.org/10.1016/j.tics.2007.09.004.
Hoffman, Matthew D, and Matthew J Johnson. 2016. “ELBO Surgery: Yet Another Way to Carve up the Variational Evidence Lower Bound.” In Advances in Neural Information Processing Systems, 4. http://approximateinference.org/accepted/HoffmanJohnson2016.pdf.
Hu, Zhiting, Zichao Yang, Ruslan Salakhutdinov, and Eric P. Xing. 2018. “On Unifying Deep Generative Models.” In. http://arxiv.org/abs/1706.00550.
Husain, Hisham. 2020. “Distributional Robustness with IPMs and Links to Regularization and GANs.” June 8, 2020. http://arxiv.org/abs/2006.04349.
Husain, Hisham, Richard Nock, and Robert C. Williamson. 2019. “A Primal-Dual Link Between GANs and Autoencoders.” In Advances in Neural Information Processing Systems, 32:415–24. https://proceedings.neurips.cc/paper/2019/hash/eae27d77ca20db309e056e3d2dcd7d69-Abstract.html.
Huszár, Ferenc. 2015. “How (not) to Train Your Generative Model: Scheduled Sampling, Likelihood, Adversary?” November 16, 2015. http://arxiv.org/abs/1511.05101.
Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. “Image-to-Image Translation with Conditional Adversarial Networks.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5967–76. https://doi.org/10.1109/CVPR.2017.632.
Jayaram, Vivek, and John Thickstun. 2020. “Source Separation with Deep Generative Priors.” February 18, 2020. http://arxiv.org/abs/2002.07942.
Jetchev, Nikolay, Urs Bergmann, and Roland Vollgraf. 2016. “Texture Synthesis with Spatial Generative Adversarial Networks.” In Advances in Neural Information Processing Systems 29. http://arxiv.org/abs/1611.08207.
Ji, Kaiyi, and Yingbin Liang. 2018. “Minimax Estimation of Neural Net Distance,” November. https://arxiv.org/abs/1811.01054v1.
Karras, Tero, Samuli Laine, and Timo Aila. 2018. “A Style-Based Generator Architecture for Generative Adversarial Networks.” December 12, 2018. http://arxiv.org/abs/1812.04948.
Kim, Yoon, Sam Wiseman, Andrew C. Miller, David Sontag, and Alexander M. Rush. 2018. “Semi-Amortized Variational Autoencoders.” February 7, 2018. http://arxiv.org/abs/1802.02550.
Kingma, Durk P, and Prafulla Dhariwal. 2018. “Glow: Generative Flow with Invertible 1x1 Convolutions.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 10236–45. Curran Associates, Inc. http://papers.nips.cc/paper/8224-glow-generative-flow-with-invertible-1x1-convolutions.pdf.
Kodali, Naveen, Jacob Abernethy, James Hays, and Zsolt Kira. 2017. “On Convergence and Stability of GANs.” December 10, 2017. http://arxiv.org/abs/1705.07215.
Krishnan, Rahul G., Uri Shalit, and David Sontag. 2017. “Structured Inference Networks for Nonlinear State Space Models.” In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2101–9. http://arxiv.org/abs/1609.09869.
Kulkarni, Tejas D., Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. 2015. “Deep Convolutional Inverse Graphics Network.” March 11, 2015. http://arxiv.org/abs/1503.03167.
Lee, Holden, Rong Ge, Tengyu Ma, Andrej Risteski, and Sanjeev Arora. 2017. “On the Ability of Neural Nets to Express Distributions.” In. http://arxiv.org/abs/1702.07028.
Lee, Honglak, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. 2009. “Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations.” In Proceedings of the 26th Annual International Conference on Machine Learning, 609–16. ICML ’09. New York, NY, USA: ACM. https://doi.org/10.1145/1553374.1553453.
Lee, Hung-yi, and Yu Tsao. n.d. “Generative Adversarial Network.” In, 222.
Li, Chun-Liang, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabas Poczos. 2017. “MMD GAN: Towards Deeper Understanding of Moment Matching Network.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 2203–13. Curran Associates, Inc. http://papers.nips.cc/paper/6815-mmd-gan-towards-deeper-understanding-of-moment-matching-network.pdf.
Liang, Dawen, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. “Variational Autoencoders for Collaborative Filtering.” In Proceedings of the 2018 World Wide Web Conference, 689–98. WWW ’18. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/3178876.3186150.
Louizos, Christos, and Max Welling. 2016. “Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors.” In, 1708–16. http://arxiv.org/abs/1603.04733.
Mirza, Mehdi, and Simon Osindero. 2014. “Conditional Generative Adversarial Nets.” November 6, 2014. http://arxiv.org/abs/1411.1784.
Miyato, Takeru, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 2018. “Spectral Normalization for Generative Adversarial Networks.” In ICLR 2018. http://arxiv.org/abs/1802.05957.
Mnih, Andriy, and Karol Gregor. 2014. “Neural Variational Inference and Learning in Belief Networks.” In Proceedings of The 31st International Conference on Machine Learning. http://www.jmlr.org/proceedings/papers/v32/mnih14.html.
Mohamed, A.-r., G. E. Dahl, and G. Hinton. 2012. “Acoustic Modeling Using Deep Belief Networks.” IEEE Transactions on Audio, Speech, and Language Processing 20 (1): 14–22. https://doi.org/10.1109/TASL.2011.2109382.
Mohamed, Shakir, and Balaji Lakshminarayanan. 2016. “Learning in Implicit Generative Models,” November. https://arxiv.org/abs/1610.03483v4.
Oord, Aaron van den, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. “WaveNet: A Generative Model for Raw Audio.” In 9th ISCA Speech Synthesis Workshop. http://arxiv.org/abs/1609.03499.
Oord, Aäron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. “Pixel Recurrent Neural Networks.” January 25, 2016. http://arxiv.org/abs/1601.06759.
Panaretos, Victor M., and Yoav Zemel. 2019. “Statistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application 6 (1): 405–31. https://doi.org/10.1146/annurev-statistics-030718-104938.
Papamakarios, George, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. 2019. “Normalizing Flows for Probabilistic Modeling and Inference.” December 5, 2019. http://arxiv.org/abs/1912.02762.
Pascual, Santiago, Joan Serrà, and Antonio Bonafonte. 2019. “Towards Generalized Speech Enhancement with Generative Adversarial Networks.” April 6, 2019. http://arxiv.org/abs/1904.03418.
Pfau, David, and Oriol Vinyals. 2016. “Connecting Generative Adversarial Networks and Actor-Critic Methods.” October 6, 2016. http://arxiv.org/abs/1610.01945.
Poole, Ben, Alexander A. Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. 2016. “Improved Generator Objectives for GANs.” In Advances in Neural Information Processing Systems 29. http://arxiv.org/abs/1612.02780.
Prenger, Ryan, Rafael Valle, and Bryan Catanzaro. 2018. “WaveGlow: A Flow-Based Generative Network for Speech Synthesis.” October 30, 2018. http://arxiv.org/abs/1811.00002.
Radford, Alec, Luke Metz, and Soumith Chintala. 2015. “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.” In. http://arxiv.org/abs/1511.06434.
Ramasinghe, Sameera, Kanchana Nisal Ranasinghe, Salman Khan, Nick Barnes, and Stephen Gould. 2020. “Conditional Generative Modeling via Learning the Latent Space.” In. https://openreview.net/forum?id=VJnrYcnRc6.
Ranganath, Rajesh, Dustin Tran, Jaan Altosaar, and David Blei. 2016. “Operator Variational Inference.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 496–504. Curran Associates, Inc. http://papers.nips.cc/paper/6091-operator-variational-inference.pdf.
Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. 2015. “Stochastic Backpropagation and Approximate Inference in Deep Generative Models.” In Proceedings of ICML. http://arxiv.org/abs/1401.4082.
Salakhutdinov, Ruslan. 2015. “Learning Deep Generative Models.” Annual Review of Statistics and Its Application 2 (1): 361–85. https://doi.org/10.1146/annurev-statistics-010814-020120.
Salimans, Tim, Diederik Kingma, and Max Welling. 2015. “Markov Chain Monte Carlo and Variational Inference: Bridging the Gap.” In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 1218–26. ICML’15. Lille, France: JMLR.org. http://proceedings.mlr.press/v37/salimans15.html.
Sun, Zheng, Jiaqi Liu, Zewang Zhang, Jingwen Chen, Zhao Huo, Ching Hua Lee, and Xiao Zhang. 2016. “Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding.” November 16, 2016. http://arxiv.org/abs/1611.05416.
Sutherland, Dougal J., Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, and Arthur Gretton. 2017. “Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy.” In Proceedings of ICLR. http://arxiv.org/abs/1611.04488.
Theis, Lucas, and Matthias Bethge. 2015. “Generative Image Modeling Using Spatial LSTMs.” June 10, 2015. http://arxiv.org/abs/1506.03478.
Tran, Dustin, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, and David M. Blei. 2017. “Deep Probabilistic Programming.” In ICLR. http://arxiv.org/abs/1701.03757.
Ullrich, K. 2020. “A Coding Perspective on Deep Latent Variable Models.” https://dare.uva.nl/search?identifier=2d6e0b96-90d3-4683-bbbe-00d2a7f1dd54.
Wang, Chuang, Hong Hu, and Yue M. Lu. 2019. “A Solvable High-Dimensional Model of GAN.” October 28, 2019. http://arxiv.org/abs/1805.08349.
Wang, Prince Zizhuang, and William Yang Wang. 2019. “Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 284–94. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1025.
Wu, Yan, Mihaela Rosca, and Timothy Lillicrap. 2019. “Deep Compressed Sensing.” In International Conference on Machine Learning, 6850–60. http://arxiv.org/abs/1905.06723.
Xie, Jianwen, Ruiqi Gao, Erik Nijkamp, Song-Chun Zhu, and Ying Nian Wu. 2020. “Representation Learning: A Statistical Perspective.” Annual Review of Statistics and Its Application 7 (1): 303–35. https://doi.org/10.1146/annurev-statistics-031219-041131.
Yang, Li-Chia, Szu-Yu Chou, and Yi-Hsuan Yang. 2017. “MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation.” In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR’2017), Suzhou, China. http://arxiv.org/abs/1703.10847.
Yang, Liu, Dongkun Zhang, and George Em Karniadakis. 2020. “Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations.” SIAM Journal on Scientific Computing 42 (1): A292–317. https://doi.org/10.1137/18M1225409.
Yang, Mengyue, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, and Jun Wang. 2020. “CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models.” July 1, 2020. http://arxiv.org/abs/2004.08697.
Yıldız, Çağatay, Markus Heinonen, and Harri Lähdesmäki. 2019. “ODE²VAE: Deep Generative Second Order ODEs with Bayesian Neural Networks.” October 24, 2019. http://arxiv.org/abs/1905.10994.
Zhou, Cong, Michael Horgan, Vivek Kumar, Cristina Vasco, and Dan Darcy. 2018. “Voice Conversion with Conditional SampleRNN.” August 24, 2018. http://arxiv.org/abs/1808.08311.
Zhu, B., J. Jiao, and D. Tse. 2020. “Deconstructing Generative Adversarial Networks.” IEEE Transactions on Information Theory 66 (11): 7155–79. https://doi.org/10.1109/TIT.2020.2983698.
Zhu, Jun-Yan, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. 2016. “Generative Visual Manipulation on the Natural Image Manifold.” In Proceedings of European Conference on Computer Vision. http://arxiv.org/abs/1609.03552.
