Certain famous neural network models are generative: informally, they produce samples from some distribution, and that distribution is tweaked until it resembles, say, the distribution of our observed data.

Tangent: learning problems involve composing derivatives and integrals of various terms, each of which measures some property of how well you have approximated the state of the world.
Probabilistic neural networks find combinations of integrals that we can solve by Monte Carlo and derivatives that we can solve by automatic differentiation, both fast on modern hardware, and then use those cunning combinations to approximate solutions that we would traditionally have phrased in terms of specific integrals which are in practice completely intractable.
The result is machine learning in strange and wonderful places where we could not have solved those integrals and derivatives in the traditional manner.
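That combination of Monte Carlo integration with differentiation can be sketched in a toy form. Below is a minimal illustration (my own example, not from any particular model) of the reparameterisation trick: an expectation over a Gaussian and its gradient with respect to the mean are both estimated from the same batch of samples, which is the move that lets autodiff frameworks push gradients through a Monte Carlo integral.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_expectation_and_grad(mu, n=100_000):
    # Reparameterise z ~ N(mu, 1) as z = mu + eps with eps ~ N(0, 1),
    # so both the expectation E[z^2] and its derivative in mu become
    # plain averages over the same samples.
    eps = rng.standard_normal(n)
    z = mu + eps
    value = np.mean(z ** 2)   # estimates E[z^2] = mu^2 + 1
    grad = np.mean(2 * z)     # estimates d/dmu E[z^2] = 2 * mu
    return value, grad

value, grad = mc_expectation_and_grad(1.5)
```

In a real probabilistic neural network the `2 * z` line would be produced by automatic differentiation rather than written by hand, but the structure is the same: an intractable integral replaced by an average over simulated samples.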
Although… there *is* something odd about that setup.
From this perspective, generative models (such as GANs and variational autoencoders) solve an intractable integral by simulating samples from it, in lieu of processing the continuous, unknowable, intractable integral that we actually wish to solve.
But that continuous intractable integral was in any case a contrivance, a thought experiment imagining a world populated with such weird Platonic objects as integrals-over-possible-states-of-the-world which only mathematicians would consider reasonable.
The world we live in has, as far as I know, no such thing.
We do not have a world where the things we observe are stochastic samples from an ineffable probability density, but rather the observations themselves are the phenomena, and the probability density over them is a weird abstraction.
It must look deeply odd from the outside when we talk about solving integrals by looking at data, instead of solving data by looking at integrals.

Historically I have considered such models to be mostly GANs and autoencoders, but there are many more flavours of model, and moreover, the division between GANs and VAEs is probably uselessly blurry by now.

