Graphs of conditional, *directed* independence are a convenient formalism for many statistical models. If you have some kind of generating process for a model, the most natural type of graphical model to express it with is often a DAG.
These are also called Bayes nets (not to be confused with Bayesian inference).
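The defining property is that the joint density factorises as a product of local conditionals, one per node given its parents. A minimal sketch over the classic rain/sprinkler toy net (the variable names and probabilities here are illustrative, not from any particular source):

```python
# Toy Bayes net over binary variables: Rain -> WetGrass <- Sprinkler.
# Each node stores p(node = 1 | parent values); the joint is the
# product of these local conditionals.
parents = {"rain": (), "sprinkler": (), "wet": ("rain", "sprinkler")}
cpt = {
    "rain":      {(): 0.2},
    "sprinkler": {(): 0.3},
    # keyed by (rain, sprinkler)
    "wet": {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.8, (1, 1): 0.99},
}

def joint(assignment):
    """p(x) = prod_i p(x_i | pa(x_i)) for a full 0/1 assignment."""
    p = 1.0
    for var, pa in parents.items():
        key = tuple(assignment[q] for q in pa)
        p1 = cpt[var][key]
        p *= p1 if assignment[var] else 1.0 - p1
    return p

# e.g. p(rain=1, sprinkler=0, wet=1) = 0.2 * 0.7 * 0.8
print(joint({"rain": 1, "sprinkler": 0, "wet": 1}))  # 0.112
```

Because each factor is a normalised conditional, the joint automatically sums to one over all assignments; that is the bookkeeping a DAG buys you for free.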

These can even be *causal* graphical models, and when we can infer those we are extracting *Science* from observational data.
See causal graphical models.

The laws of message-passing inference assume their most complicated form for directed models; in practice it is frequently easier to convert a directed model to a factor graph for implementation. YMMV.
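The conversion itself is mechanical: each node, together with its parents, becomes one factor whose potential is that node's conditional distribution. A sketch of just the structural step (the representation and names are my own, not any particular library's API):

```python
# Directed model -> factor graph: one factor per node, with scope
# equal to the node plus its parents. The factor's potential would
# be that node's CPT; here we only build the bipartite structure.
parents = {"rain": (), "sprinkler": (), "wet": ("rain", "sprinkler")}

def to_factor_graph(parents):
    """Map each variable v to a factor f_v scoped on (v, pa(v))."""
    return {f"f_{v}": (v,) + tuple(pa) for v, pa in parents.items()}

factors = to_factor_graph(parents)
print(factors["f_wet"])  # ('wet', 'rain', 'sprinkler')
```

On this undirected bipartite structure, the sum-product rules reduce to the two uniform message types (variable-to-factor and factor-to-variable), instead of the parent/child case analysis needed on the raw DAG.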

## Simpson’s paradox

Simpson’s paradox is an evergreen example of the importance of the causal graph. For a beautiful and clear example see Allen Downey’s *Simpson’s Paradox and Age Effects*. It is also the key example in Michael Nielsen’s *Reinventing Explanation*.
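The paradox is easy to reproduce numerically. Below, the widely cited kidney-stone figures (Charig et al. 1986, as usually reported in secondary sources): treatment A beats B within *each* stratum of stone size, yet B looks better in aggregate, because stone size confounds the choice of treatment. Which comparison is the right one is exactly the question the causal graph answers.

```python
# Simpson's paradox with the classic kidney-stone numbers:
# (treatment, stone size) -> (successes, trials).
data = {
    ("A", "small"): (81, 87),
    ("B", "small"): (234, 270),
    ("A", "large"): (192, 263),
    ("B", "large"): (55, 80),
}

def rate(successes, trials):
    return successes / trials

# Within each stratum, A wins...
for size in ("small", "large"):
    assert rate(*data[("A", size)]) > rate(*data[("B", size)])

# ...but pooled over strata, B wins.
def pooled(treatment):
    s = sum(data[(treatment, sz)][0] for sz in ("small", "large"))
    n = sum(data[(treatment, sz)][1] for sz in ("small", "large"))
    return s / n

print(f"A overall: {pooled('A'):.3f}")  # 0.780
print(f"B overall: {pooled('B'):.3f}")  # 0.826
```

The reversal happens because A was preferentially assigned to the harder (large-stone) cases; conditioning on the confounder recovers the within-stratum comparison.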

