Probabilistic graphical models

Cleaving reality at the joint



Figure: Taxonomy of graphical models (Barber 2012).

Placeholder for my notes on probabilistic graphical models. In general, graphical models are a way of handling multivariate data by working out what is conditionally independent of what else.
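
For example, with a binary chain A → B → C the joint factorizes as p(a, b, c) = p(a) p(b|a) p(c|b), which implies A is conditionally independent of C given B. A toy numerical check of that claim (the probability tables are made up for illustration; nothing beyond numpy is assumed):

```python
# Toy check: for a binary chain A -> B -> C, the joint is
# p(a) p(b|a) p(c|b), and A is independent of C given B.
import numpy as np

p_a = np.array([0.6, 0.4])                  # p(a)
p_b_a = np.array([[0.7, 0.3], [0.2, 0.8]])  # p(b|a), rows indexed by a
p_c_b = np.array([[0.9, 0.1], [0.5, 0.5]])  # p(c|b), rows indexed by b

# Assemble the full joint table from the factorization.
joint = p_a[:, None, None] * p_b_a[:, :, None] * p_c_b[None, :, :]

# Check that p(a, c | b) = p(a | b) p(c | b) for each value of b.
p_b = joint.sum(axis=(0, 2))
for b in (0, 1):
    p_ac = joint[:, b, :] / p_b[b]          # p(a, c | b)
    assert np.allclose(p_ac, np.outer(p_ac.sum(axis=1), p_ac.sum(axis=0)))
```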

Thematically, this material is scattered across graphical models in inference, learning graphs from data, learning causation from data plus graphs, and quantum graphical models, where it all looks a bit different under noncommutative probability.

See also diagramming graphical models.

Directed graphs

Graphs of directed conditional independence are a convenient formalism for many models. These are also called Bayes nets (not to be confused with Bayesian inference).

See directed graphical models.
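
As a concrete sketch, here is ancestral sampling in the classic rain/sprinkler/wet-grass net (all the probabilities are made up for illustration), with a crude rejection-sampling query on top:

```python
# Ancestral sampling: draw each variable given its parents, in
# topological order. Numbers are illustrative only.
import random

def draw():
    rain = random.random() < 0.2
    sprinkler = random.random() < (0.01 if rain else 0.4)
    p_wet = 0.95 if (rain and sprinkler) else 0.9 if (rain or sprinkler) else 0.05
    wet = random.random() < p_wet
    return rain, sprinkler, wet

# Crude rejection sampling: estimate p(rain | grass is wet).
samples = [draw() for _ in range(100_000)]
rain_given_wet = [rain for rain, _, wet in samples if wet]
print(sum(rain_given_wet) / len(rain_given_wet))
```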

Undirected, a.k.a. Markov graphs

A.k.a. Markov random fields or Markov networks (perhaps other names too?).

See undirected graphical models.
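
A minimal sketch of the undirected case, assuming an Ising-style chain with pairwise potentials exp(J·xᵢxⱼ): the model only specifies probabilities up to the partition function Z, which at this size we can afford to compute by brute force.

```python
# An Ising-style chain of 4 spins with pairwise potentials exp(J * x_i * x_j).
# Probabilities are defined only up to the partition function Z.
import itertools, math

J, n = 0.8, 4
edges = [(i, i + 1) for i in range(n - 1)]

def weight(x):  # unnormalized probability of a spin configuration
    return math.exp(J * sum(x[i] * x[j] for i, j in edges))

# Brute-force partition function over all 2**n configurations.
Z = sum(weight(x) for x in itertools.product((-1, 1), repeat=n))
print(Z)  # p(x) = weight(x) / Z
```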

Factor graphs

A unifying formalism for the directed and undirected graphical models. I have not really used these. See factor graphs.
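
The unifying idea is that a model is just a list of factors, each with a scope and a table; the joint is their product, up to normalization. Both examples above can be written this way. A hypothetical three-binary-variable sketch, with marginals by brute-force enumeration (real inference would use sum-product message passing instead):

```python
# A factor graph as a plain list of (scope, table) pairs; the factor
# tables here are arbitrary illustrative numbers.
import itertools
import numpy as np

factors = [
    ((0, 1), np.array([[1.0, 2.0], [3.0, 1.0]])),  # f(a, b)
    ((1, 2), np.array([[2.0, 1.0], [1.0, 4.0]])),  # g(b, c)
]

def marginal(var, n_vars=3):
    p = np.zeros(2)
    for x in itertools.product((0, 1), repeat=n_vars):
        w = np.prod([table[tuple(x[i] for i in scope)] for scope, table in factors])
        p[x[var]] += w
    return p / p.sum()

print(marginal(1))  # p(b)
```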

Implementations

David Barber’s discrete graphical model code (Julia) is pedagogically useful, although probably not industrial-grade; it can do queries over graphical models.

All of the probabilistic programming languages end up needing to account for graphical model structure in practice.
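
For instance, a sketch assuming a recent version of PyMC: the model context records each random variable together with its parents as you declare them, which is exactly a directed graphical model.

```python
# The pm.Model context tracks dependencies between random variables,
# building a DAG a -> b -> c behind the scenes.
import pymc as pm

with pm.Model() as model:
    a = pm.Normal("a", 0.0, 1.0)
    b = pm.Normal("b", mu=a, sigma=1.0)
    c = pm.Normal("c", mu=b, sigma=1.0, observed=0.5)

# PyMC can render the DAG it inferred (requires the graphviz package).
print(pm.model_to_graphviz(model))
```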

References

Barber, David. 2012. Bayesian Reasoning and Machine Learning. Cambridge; New York: Cambridge University Press.
Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Information Science and Statistics. New York: Springer.
Buntine, W. L. 1994. β€œOperations for Learning with Graphical Models.” Journal of Artificial Intelligence Research 2 (1): 159–225.
Charniak, Eugene. 1991. β€œBayesian Networks Without Tears.” AI Magazine 12 (4): 50.
Da Costa, Lancelot, Karl Friston, Conor Heins, and Grigorios A. Pavliotis. 2021. β€œBayesian Mechanics for Stationary Processes.” arXiv:2106.13830 [Math-Ph, Physics:nlin, q-Bio], June.
Dawid, A. Philip. 1979. β€œConditional Independence in Statistical Theory.” Journal of the Royal Statistical Society. Series B (Methodological) 41 (1): 1–31.
β€”β€”β€”. 1980. β€œConditional Independence for Statistical Operations.” The Annals of Statistics 8 (3): 598–617.
Jordan, Michael Irwin. 1999. Learning in Graphical Models. Cambridge, Mass.: MIT Press.
Koller, Daphne, and Nir Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. Cambridge, MA: MIT Press.
Lauritzen, Steffen L. 1996. Graphical Models. Oxford Statistical Science Series. Clarendon Press.
Levine, Sergey. 2018. β€œReinforcement Learning and Control as Probabilistic Inference: Tutorial and Review.” arXiv:1805.00909 [Cs, Stat], May.
Montanari, Andrea. 2011. β€œLecture Notes for Stat 375 Inference in Graphical Models.”
Murphy, Kevin P. 2012. Machine Learning: A Probabilistic Perspective. Adaptive Computation and Machine Learning Series. Cambridge, MA: MIT Press.
Obermeyer, Fritz, Eli Bingham, Martin Jankowiak, Du Phan, and Jonathan P. Chen. 2020. β€œFunctional Tensors for Probabilistic Programming.” arXiv:1910.10775 [Cs, Stat], March.
Pearl, Judea. 2008. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Rev. 2nd printing. The Morgan Kaufmann Series in Representation and Reasoning. San Francisco, CA: Morgan Kaufmann.
β€”β€”β€”. 2009. Causality: Models, Reasoning and Inference. Cambridge University Press.
Pearl, Judea, Dan Geiger, and Thomas Verma. 1989. β€œConditional Independence and Its Representations.” Kybernetika 25 (7): 33–44.
Sadeghi, Kayvan. 2020. β€œOn Finite Exchangeability and Conditional Independence.” Electronic Journal of Statistics 14 (2): 2773–97.
