Placeholder for my notes on probabilistic graphical models. In general, graphical models are a way of handling multivariate data by working out which variables are conditionally independent of which others.
Thematically, this material is scattered across graphical models in inference, learning graphs from data, learning causation from data plus graphs, and quantum graphical models, where it all looks a bit different under noncommutative probability.
See also diagramming graphical models.
Graphs of directed conditional independence are a convenient formalism for many models. These are also called Bayes nets (not to be confused with Bayesian inference).
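A minimal sketch of the idea, with toy numbers of my own (not from any particular source): a three-node chain A → B → C over binary variables, whose joint factorizes as P(A, B, C) = P(A) P(B|A) P(C|B). The graph implies A ⊥ C | B, which we can check by brute-force summation.

```python
from itertools import product

# Toy CPTs (assumed numbers, purely for illustration).
p_a = {0: 0.6, 1: 0.4}                               # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},                  # P(B|A): p_b_given_a[a][b]
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1},                  # P(C|B): p_c_given_b[b][c]
               1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    # The directed factorization: P(A) * P(B|A) * P(C|B).
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def cond_c1(fix):
    """P(C=1 | the variables fixed in `fix`), by summing the joint."""
    num = den = 0.0
    for a, b, c in product([0, 1], repeat=3):
        assign = {'a': a, 'b': b, 'c': c}
        if any(assign[k] != v for k, v in fix.items()):
            continue
        p = joint(a, b, c)
        den += p
        if c == 1:
            num += p
    return num / den

# Given B, also conditioning on A changes nothing: B d-separates A from C.
for b in (0, 1):
    for a in (0, 1):
        assert abs(cond_c1({'b': b, 'a': a}) - cond_c1({'b': b})) < 1e-12
```

The point of the formalism is exactly this: the conditional independences are read off the graph, without touching the numbers.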
Undirected, a.k.a. Markov graphs
a.k.a. Markov random fields, Markov networks. (Other types?)
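In the undirected case the joint is a normalized product of clique potentials rather than conditional distributions. A minimal sketch with assumed potentials of my own: a chain X1 – X2 – X3 of binary variables with an attractive pairwise potential.

```python
from itertools import product

def phi(u, v):
    # Attractive pairwise potential: agreeing neighbours get more weight.
    # (Numbers are my own toy choice.)
    return 2.0 if u == v else 1.0

def unnormalised(x1, x2, x3):
    # Product of potentials over the edges of the chain X1 - X2 - X3.
    return phi(x1, x2) * phi(x2, x3)

# The partition function Z normalizes the product into a distribution.
Z = sum(unnormalised(*x) for x in product([0, 1], repeat=3))

def p(x1, x2, x3):
    return unnormalised(x1, x2, x3) / Z

# Sanity checks: probabilities sum to one; agreement is favoured.
assert abs(sum(p(*x) for x in product([0, 1], repeat=3)) - 1.0) < 1e-12
assert p(0, 0, 0) > p(0, 1, 0)
```

Unlike the directed case, no local factor is itself a probability; everything is relative weights until Z is computed, which is where the hard work in MRF inference usually hides.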
A unifying formalism for directed and undirected graphical models. I have not really used these. See factor graphs.
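A sketch of why factor graphs unify the two cases (representation and numbers are my own assumptions): a factor graph is just a list of factors, each a (scope, function) pair, and both directed CPTs and undirected potentials fit that shape. Here the directed chain from above, with marginals by brute-force summation.

```python
from itertools import product

# Directed chain A -> B -> C encoded as generic factors over binary variables.
# Undirected potentials would go in the same list, unchanged in form.
factors = [
    (('a',), lambda a: [0.6, 0.4][a]),                          # P(A)
    (('a', 'b'), lambda a, b: [[0.7, 0.3], [0.2, 0.8]][a][b]),  # P(B|A)
    (('b', 'c'), lambda b, c: [[0.9, 0.1], [0.5, 0.5]][b][c]),  # P(C|B)
]

def joint(assign):
    # The joint is the product of all factors, each applied to its scope.
    out = 1.0
    for scope, f in factors:
        out *= f(*(assign[v] for v in scope))
    return out

def marginal(var, value):
    """Brute-force marginal P(var = value) over all assignments."""
    total = 0.0
    for a, b, c in product([0, 1], repeat=3):
        assign = {'a': a, 'b': b, 'c': c}
        if assign[var] == value:
            total += joint(assign)
    return total
```

Message-passing algorithms (sum-product etc.) exploit this factor list to avoid the exponential brute-force sum, but the representation is the same.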
Pedagogically useful, although probably not industrial-grade: David Barber’s discrete graphical model code (Julia) can do queries over graphical models.
All of the probabilistic programming languages end up needing to account for graphical model structure in practice.