Energy-based models

Inference with kinda-tractable unnormalized densities

I don’t actually know everything that fits under this heading; it sounds like it is simply inference for undirected graphical models. Or is there something distinct going on here?
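To make the setup concrete, here is a minimal sketch of the energy-based-model formulation: we can evaluate an unnormalized density p̃(x) = exp(−E(x)) pointwise, but the normalizing constant Z requires a sum over all configurations. The quadratic energy and binary state space below are illustrative assumptions (a toy Boltzmann-machine-style model), not any particular published model.

```python
# Sketch: an (assumed) quadratic energy over binary states, as in a
# Boltzmann machine / Markov random field.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d = 8                              # number of binary units (toy size)
W = rng.normal(scale=0.1, size=(d, d))
W = (W + W.T) / 2                  # symmetric pairwise couplings
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.1, size=d)  # per-unit biases

def energy(x):
    """E(x) = -x'Wx/2 - b'x; lower energy = more probable."""
    return -0.5 * x @ W @ x - b @ x

def unnormalized_density(x):
    """p~(x) = exp(-E(x)); cheap to evaluate at any single x."""
    return np.exp(-energy(x))

# The normalizing constant Z sums over all 2^d configurations --
# feasible only at toy scale, which is exactly the inference problem.
Z = sum(unnormalized_density(np.array(x, dtype=float))
        for x in itertools.product([0, 1], repeat=d))
x = rng.integers(0, 2, size=d).astype(float)
p_x = unnormalized_density(x) / Z
```

At d = 8 the sum over 2⁸ states is trivial; at realistic dimension it is hopeless, which is why approximate inference machinery exists at all.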

Descending the local energy gradient takes us to a more probable configuration.
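That idea can be sketched in a few lines: since p(x) ∝ exp(−E(x)), gradient descent on E(x) climbs the density toward a local mode, and adding Gaussian noise of the right scale turns the same update into (unadjusted) Langevin sampling. The Gaussian energy below is an illustrative assumption chosen so the minimum is known in closed form.

```python
# Sketch: noiseless gradient steps on E(x) find a local mode of
# p(x) ~ exp(-E(x)); with noise = sqrt(2 * step) the same loop is an
# unadjusted Langevin sampler. Gaussian energy assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
P = A @ A.T + d * np.eye(d)        # positive-definite precision matrix
mu = rng.normal(size=d)            # known energy minimum

def energy(x):
    """E(x) = (x - mu)' P (x - mu) / 2, a Gaussian energy."""
    diff = x - mu
    return 0.5 * diff @ P @ diff

def grad_energy(x):
    return P @ (x - mu)

def descend(x, step=0.05, n_steps=500, noise=0.0):
    """Gradient descent on E; each step moves toward lower energy,
    i.e. a more probable configuration."""
    for _ in range(n_steps):
        x = x - step * grad_energy(x) + noise * rng.normal(size=d)
    return x

x0 = rng.normal(size=d) * 5
x_mode = descend(x0)               # ends near mu, the energy minimum
```

With `noise=0.0` this is pure mode-seeking; setting `noise=np.sqrt(2 * 0.05)` would instead produce (approximate) samples from p, which is the usual MCMC reading of the same update.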


Che, Tong, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, and Yoshua Bengio. 2020. “Your GAN Is Secretly an Energy-Based Model and You Should Use Discriminator Driven Latent Sampling.” March 23, 2020.
Clifford, P. 1990. “Markov Random Fields in Statistics.” In Disorder in Physical Systems: A Volume in Honour of John Hammersley, edited by G. R. Grimmett and D. J. A. Welsh. Oxford England : New York: Oxford University Press.
Hinton, Geoffrey. 2010. “A Practical Guide to Training Restricted Boltzmann Machines.” In Neural Networks: Tricks of the Trade, 9:926. Lecture Notes in Computer Science 7700. Springer Berlin Heidelberg. hinton/absps/guideTR.pdf.
LeCun, Yann, Sumit Chopra, Raia Hadsell, M. Ranzato, and F. Huang. 2006. “A Tutorial on Energy-Based Learning.” In Predicting Structured Data.
Montavon, Grégoire, Klaus-Robert Müller, and Marco Cuturi. 2016. “Wasserstein Training of Restricted Boltzmann Machines.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3711–19. Curran Associates, Inc.
Pollard, Dave. 2004. “Hammersley-Clifford Theorem for Markov Random Fields.”
Salakhutdinov, Ruslan. 2015. “Learning Deep Generative Models.” Annual Review of Statistics and Its Application 2 (1): 361–85.
