Free energy

Fancy analogy for understanding cognition

Not “free as in speech” or “free as in beer”, nor “free energy” in the sense of perpetual-motion machines, zero-point energy, or pills that turn your water into petroleum, but rather a particular mathematical object that pops up in variational Bayesian inference and in wacky theories of cognition.

In variational Bayes

Variational Bayesian inference is a formalism for approximate posterior inference and learning, borrowing bits from statistical mechanics and graphical models.

Free energy shows up in variational Bayes as the negative of the ELBO, the evidence lower bound, which AFAICT means that this term is defined via the Kullback-Leibler divergence between the approximate posterior and the true one. Presumably an analogous term would pop up in non-KL approximations.
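To make the identity concrete: for an approximating density \(q(z)\), the variational free energy is \(F(q) = \mathbb{E}_q[\log q(z) - \log p(x, z)] = \mathrm{KL}(q \,\|\, p(z \mid x)) - \log p(x)\), i.e. minus the ELBO, so it upper-bounds the negative log evidence with equality when \(q\) is the exact posterior. A minimal numerical sketch, using a toy conjugate Gaussian model of my own choosing (not anything from the references above), where everything is available in closed form:

```python
import math

# Toy conjugate model (an illustrative assumption, not from the post):
#   prior      z ~ N(0, 1)
#   likelihood x | z ~ N(z, 1)
# so the exact posterior is N(x/2, 1/2) and the evidence is x ~ N(0, 2).

def free_energy(m, s2, x):
    """Variational free energy F(q) = E_q[log q(z) - log p(x, z)]
    for a Gaussian variational family q = N(m, s2), in closed form."""
    entropy = 0.5 * math.log(2 * math.pi * math.e * s2)   # H[q]
    expected_log_joint = (
        -math.log(2 * math.pi)
        - 0.5 * (m**2 + s2)            # E_q[z^2]/2, from the prior term
        - 0.5 * ((x - m)**2 + s2)      # E_q[(x - z)^2]/2, from the likelihood
    )
    return -entropy - expected_log_joint

x = 1.0
neg_log_evidence = 0.5 * math.log(2 * math.pi * 2.0) + x**2 / 4.0

# F upper-bounds -log p(x) for any q (equivalently, -F is the ELBO)...
print(free_energy(0.0, 1.0, x) >= neg_log_evidence)               # True
# ...and the bound is tight when q is the exact posterior N(x/2, 1/2),
# at which point the KL gap vanishes.
print(abs(free_energy(x / 2, 0.5, x) - neg_log_evidence) < 1e-9)  # True
```

Minimising \(F\) over \((m, s^2)\) is then exactly what a variational inference routine does; the gap \(F - (-\log p(x))\) is the KL divergence from \(q\) to the true posterior.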

As a model for cognition

Recycling this metaphor for the method by which the brain learns, the “free energy principle” is part of the theory of predictive processing, a model of the mind as a learning process.


Bengio, Yoshua. 2009. Learning Deep Architectures for AI. Vol. 2.
Castellani, Tommaso, and Andrea Cavagna. 2005. “Spin-Glass Theory for Pedestrians.” Journal of Statistical Mechanics: Theory and Experiment 2005 (05): P05012.
Frey, B. J., and Nebojsa Jojic. 2005. “A Comparison of Algorithms for Inference and Learning in Probabilistic Graphical Models.” IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (9): 1392–1416.
Friston, Karl. 2010. “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience 11 (2): 127.
———. 2013. “Life as We Know It.” Journal of The Royal Society Interface 10 (86).
Geirhos, Robert, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. “Shortcut Learning in Deep Neural Networks.” April 16, 2020.
Jordan, Michael I., Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. “An Introduction to Variational Methods for Graphical Models.” Machine Learning 37 (2): 183–233.
Jordan, Michael I., and Yair Weiss. 2002. “Probabilistic Inference in Graphical Models.” Handbook of Neural Networks and Brain Theory.
LeCun, Yann, Sumit Chopra, Raia Hadsell, M. Ranzato, and F. Huang. 2006. “A Tutorial on Energy-Based Learning.” Predicting Structured Data.
Millidge, Beren, Alexander Tschantz, and Christopher L. Buckley. 2020. “Whence the Expected Free Energy?” September 28, 2020.
Montanari, Andrea. 2011. “Lecture Notes for Stat 375 Inference in Graphical Models.”
Wainwright, Martin J., and Michael I. Jordan. 2008. Graphical Models, Exponential Families, and Variational Inference. Vol. 1. Foundations and Trends® in Machine Learning. Now Publishers.
Wainwright, Martin, and Michael I Jordan. 2005. “A Variational Principle for Graphical Models.” In New Directions in Statistical Signal Processing. Vol. 155. MIT Press.
Wang, Chaohui, Nikos Komodakis, and Nikos Paragios. 2013. “Markov Random Field Modeling, Inference & Learning in Computer Vision & Image Understanding: A Survey.” Computer Vision and Image Understanding 117 (11): 1610–27.
Williams, Daniel. 2020. “Predictive Coding and Thought.” Synthese 197 (4): 1749–75.
Xing, Eric P., Michael I. Jordan, and Stuart Russell. 2003. “A Generalized Mean Field Algorithm for Variational Inference in Exponential Families.” In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, 583–91. UAI’03. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Yedidia, J. S., W. T. Freeman, and Y. Weiss. 2003. “Understanding Belief Propagation and Its Generalizations.” In Exploring Artificial Intelligence in the New Millennium, edited by G. Lakemeyer and B. Nebel, 239–269. Morgan Kaufmann Publishers.
Yedidia, Jonathan S., W. T. Freeman, and Y. Weiss. 2005. “Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms.” IEEE Transactions on Information Theory 51 (7): 2282–312.
