# Directed graphical models

September 20, 2017 — May 13, 2020

Directed graphs of conditional independence are a convenient formalism for many statistical models. If your model comes with a generating process, the most natural type of graphical model to express it is usually a DAG. These are also called Bayes nets (not to be confused with Bayesian inference).
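The generating-process view suggests *ancestral sampling*: draw each node after its parents, in topological order. A minimal sketch with the classic rain/sprinkler/wet-grass DAG — all probabilities here are made up for illustration:

```python
import random

# Toy DAG: Rain -> Sprinkler, Rain -> WetGrass, Sprinkler -> WetGrass.
# Ancestral sampling: draw each node after its parents, in topological order.
# All conditional probabilities are illustrative, not from any real data.

def sample():
    rain = random.random() < 0.2
    # Sprinkler is less likely to be on when it rains.
    sprinkler = random.random() < (0.01 if rain else 0.4)
    # Noisy-OR combination of the two causes of wet grass.
    p_wet = 0.0
    if rain:
        p_wet = 1 - (1 - p_wet) * (1 - 0.8)
    if sprinkler:
        p_wet = 1 - (1 - p_wet) * (1 - 0.9)
    wet = random.random() < p_wet
    return rain, sprinkler, wet

random.seed(0)
draws = [sample() for _ in range(10_000)]
n_rain = sum(1 for r, s, w in draws if r)
p_wet_given_rain = sum(w for r, s, w in draws if r) / max(1, n_rain)
print(f"P(wet | rain) ≈ {p_wet_given_rain:.2f}")
```

The joint density factorises as the product of each node's conditional given its parents, which is exactly the order the sampler follows.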

These can even be *causal* graphical models, and when we can infer those we are extracting *Science* from observational data. See causal graphical models.

The laws of message-passing inference take their (IMO) most complicated form for directed models; in practice it is frequently easier to convert a directed model to a factor graph for implementation. YMMV.
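The standard first step of that conversion is *moralization*: each node together with its parents becomes one factor, so co-parents get "married" and edge directions are dropped. A pure-Python sketch (function and variable names are my own):

```python
# Moralization sketch: turn a DAG into the undirected graph underlying its
# factor representation. Each (node, parents) set is one factor/clique, so
# co-parents are connected to each other and directions are discarded.

def moralize(parents):
    """parents: dict mapping each node to a list of its parent nodes."""
    edges = set()
    for child, pas in parents.items():
        # Every parent connects to its child...
        for p in pas:
            edges.add(frozenset((p, child)))
        # ...and co-parents of the same child are married to each other.
        for i, a in enumerate(pas):
            for b in pas[i + 1:]:
                edges.add(frozenset((a, b)))
    return edges

# The classic v-structure Rain -> WetGrass <- Sprinkler:
dag = {"Rain": [], "Sprinkler": [], "WetGrass": ["Rain", "Sprinkler"]}
moral = moralize(dag)
# Rain and Sprinkler end up adjacent even though no directed edge joins them.
print(frozenset(("Rain", "Sprinkler")) in moral)  # → True
```

The marrying step is where directed models lose conditional-independence information (the v-structure is no longer readable off the undirected graph), which is part of why the directed message-passing rules are fiddlier.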

## 1 Simpson’s paradox

Simpson’s paradox is an evergreen example of the importance of the causal graph. For a beautiful and clear example see Allen Downey’s Simpson’s Paradox and Age Effects. It is also the key explanation in Michael Nielsen’s Reinventing Explanation.
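A minimal numeric instance, using the well-known kidney-stone treatment counts: treatment A beats B within each stone-size stratum, yet loses in aggregate, because A was assigned mostly to the harder (large-stone) cases:

```python
# Simpson's paradox with the classic kidney-stone counts (Charig et al.):
# (successes, trials) per treatment, stratified by stone size.
groups = {
    "small": {"A": (81, 87), "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

for size, arms in groups.items():
    rate_a = arms["A"][0] / arms["A"][1]
    rate_b = arms["B"][0] / arms["B"][1]
    print(f"{size}: A={rate_a:.2f} B={rate_b:.2f}")  # A wins in both strata

# Pool the strata: B now appears better, because stone size is a
# confounder (it influences both treatment choice and outcome).
totals = {t: [sum(groups[s][t][i] for s in groups) for i in (0, 1)]
          for t in ("A", "B")}
overall_a = totals["A"][0] / totals["A"][1]
overall_b = totals["B"][0] / totals["B"][1]
print(f"overall: A={overall_a:.2f} B={overall_b:.2f}")  # B wins in aggregate
```

Which comparison is the right one depends on the causal graph: if stone size is a confounder of treatment and outcome, the stratified (adjusted) comparison is the one that answers the interventional question.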

## 2 Tools

BayesNets.jl is a Julia package for reasoning over directed graphical models.
