**tl;dr**: These are the notes from a reading group I led in 2016 on causal DAGs.
When I have time to expand these notes into complete sentences,
I will migrate the good bits to an expanded and improved
notebook on causal DAGs.
For now, see the updated and fixed version of this.

We follow Pearl’s summary (Pearl 2009a), approximately sections 1–3 of the paper.

In particular, I want to get to the identification of causal effects from observational data with unobserved covariates, given an existing causal DAG, via criteria such as the back-door criterion. We’ll see.

Approach: casual; motivate Pearl’s pronouncements without deriving everything from axioms. Not statistical; we will not answer the question of how we infer graph structure from data. We will skip many complexities by adopting several slightly over-restrictive conditions, which we would relax if we were not doing this in one hour.

Not covered: UGs, PDAGs…

Assumptions: No-one here is an expert in this DAG graphical formalism for causal inference.

## Motivational examples

- Wet pavements
- Obesity contagion
- Nobel prizes and chocolate
- Simpson’s paradox
- etc

## Machinery

We are interested in representing influence between variables in a non-parametric fashion.

Our main tool to do this will be conditional independence DAGs, and causal use of these. Alternative name: “Bayesian Belief networks”. (Overloads “Bayesian”, so not used here)

### DAGs

DAG: Directed (probabilistic) graphical model. The graph is defined, as usual, by a set of vertices and edges.

\[ \mathcal{G}=(\mathbf{V},E) \]

We show the directions of edges by writing them as arrows.

For nodes \(X,Y\in V\) we write \(X\rightarrow Y\) to mean there is a directed edge joining them.

Familiar from, e.g., Structural equation models, hierarchical models, expert systems. General graph theory…

A graph with *directed* edges, and no cycles.
(you cannot return to the same starting node traveling only *forward* along the arrows)

We need some terminology.

- Parents
- The parents of a node \(X\) in a graph are all nodes joined to it by an incoming arrow, \(\operatorname{parents}(X)=\{Y\in V:Y\rightarrow X\}.\)
- Children
- similarly, \(\operatorname{children}(X)=\{Y\in V:X\rightarrow Y\}.\)
- Co-parent
- \(\operatorname{coparents}(X)=\{Y\in V:\exists Z\in V \text{ s.t. } X\rightarrow Z\text{ and }Y\rightarrow Z\}.\)

*Ancestors* and *descendants* should be clear as well.
For convenience, we adopt the convention \(X\in\operatorname{descendants}(X).\)

### Random variables

I will deal with finite collections of random variables \(\mathbf{V}\).

For simplicity of exposition, each of the RVs will be supported on \(\mathcal{X}_i\subset\mathbb{Z}\), so that we may work with pmfs, and write \(p(X_i|X_j)\) for the pmf. I may write \(p(x_i|x_j)\) to mean \(p(X_i=x_i|X_j=x_j)\).

Also we are working with *sets of random variables* rather than
*sets of events* and the discrete state space
reduces the need to discuss sets of events.

Extension to continuous RVs, or arbitrary RVs is trivial for everything I discuss here. (A challenge is if the probabilities are not all strictly positive.)

Motivation in terms of structural models.

\[\begin{aligned} X_6 &= f_6(X_4, X_3, \varepsilon_6) \\ X_5 &= f_5(X_4, X_3, \varepsilon_5) \\ X_4 &= f_4(X_3, X_2, X_1, \varepsilon_4) \\ X_3 &= f_3(\varepsilon_3) \\ X_2 &= f_2(X_1, \varepsilon_2) \\ X_1 &= f_1(\varepsilon_1) \\ \end{aligned}\]

Without further information about the forms of \(f_i\) or \(\varepsilon_i\), our assumptions have constrained our conditional independence relations to permit a particular factorization of the mass function:

\[ p(x_6, x_5, x_4, x_3, x_2, x_1) = p(x_1) p(x_2|x_1) p(x_3) p(x_4|x_1, x_2, x_3) p(x_5|x_3,x_4) p(x_6|x_3,x_4) \]

We are “nonparametric” in the sense that working with this conditional factorization does not require any further parametric assumptions on the model.

However, we would like to proceed from this factorization to conditional independence, which is non-trivial. Specifically, we would like to know which variables are conditionally independent of others, given such an (assumed) factorization.

More notation: We write

\[ X \perp Y|Z \]

for \(X\) independent of \(Y\) given \(Z\).

We also use this notation for sets of random variables, bolding them when it is necessary to emphasize this.

\[ \mathbf{X} \perp \mathbf{Y}|\mathbf{Z} \]

Questions:

- \(X_2\perp X_3\)?
- \(X_2\perp X_3|X_1\)?
- \(X_2\perp X_3|X_4\)?
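These questions can be checked by brute force before we build any graph machinery. The following sketch is a toy parameterization of my own (not from Pearl): it fills in random CPTs for binary \(X_1,\dots,X_6\) wired as in the structural equations above, builds the joint by multiplying the factors, and measures departures from independence numerically.

```python
import itertools
import random

random.seed(1)

def rand_pmf(n):
    """A random pmf over n outcomes."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

# Random CPTs for binary X1..X6, wired as in the structural equations:
p1 = rand_pmf(2)
p3 = rand_pmf(2)
p2 = {x1: rand_pmf(2) for x1 in (0, 1)}                             # p(x2|x1)
p4 = {k: rand_pmf(2) for k in itertools.product((0, 1), repeat=3)}  # p(x4|x1,x2,x3)
p5 = {k: rand_pmf(2) for k in itertools.product((0, 1), repeat=2)}  # p(x5|x3,x4)
p6 = {k: rand_pmf(2) for k in itertools.product((0, 1), repeat=2)}  # p(x6|x3,x4)

def joint(x1, x2, x3, x4, x5, x6):
    return (p1[x1] * p2[x1][x2] * p3[x3]
            * p4[x1, x2, x3][x4] * p5[x3, x4][x5] * p6[x3, x4][x6])

def marginal(keep):
    """Marginal pmf over the variables with the indices in `keep`."""
    table = {}
    for xs in itertools.product((0, 1), repeat=6):
        key = tuple(xs[i - 1] for i in keep)
        table[key] = table.get(key, 0.0) + joint(*xs)
    return table

# X2 ⊥ X3? Then p(x2,x3) should factor as p(x2)p(x3).
p23, pm2, pm3 = marginal((2, 3)), marginal((2,)), marginal((3,))
gap = max(abs(p23[a, b] - pm2[(a,)] * pm3[(b,)])
          for a in (0, 1) for b in (0, 1))

# X2 ⊥ X3 | X4? Conditioning on the common child X4 generically couples them.
p234, p24, p34, pm4 = (marginal((2, 3, 4)), marginal((2, 4)),
                       marginal((3, 4)), marginal((4,)))
gap4 = max(abs(p234[a, b, c] / pm4[(c,)]
               - (p24[a, c] / pm4[(c,)]) * (p34[b, c] / pm4[(c,)]))
           for a in (0, 1) for b in (0, 1) for c in (0, 1))

print("marginal gap:", gap)    # ~0: X2 and X3 are independent
print("gap given X4:", gap4)   # generically nonzero
```

The first gap is zero up to floating point, since the assumed factorization makes \(p(x_2,x_3)\) factor exactly; conditioning on \(X_4\) generically induces dependence, whatever the CPTs.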

However, this product notation is not illuminating; we use a graph formalism instead. That’s where the DAGs come in.

This will proceed in three steps:

1. The graphs will describe *conditional factorization relations*.
2. We do some work to construct from these relations some *conditional independence relations*, which may be read off the graph.
3. From these relations plus a causal interpretation we derive rules for the *identification of causal relations*.

If we get further than that, it will be all about coffee.

Anyway, a joint distribution \(p(\mathbf{V})\) decomposes according to a directed graph \(\mathcal{G}\) if we may factor it

\[ p(X_1,X_2,\dots,X_v)=\prod_{i=1}^v p(X_i|\operatorname{parents}(X_i)) \]

Uniqueness?

It would be tempting to suppose that a node is independent of its children given its parents or somesuch. But things are not quite so simple.

Questions:

- \(\text{Sprinkler}\perp \text{Rain}\)?
- \(\text{Sprinkler}\perp \text{Rain}|\text{Wet season}\)?
- \(\text{Sprinkler}\perp \text{Rain}|\text{Wet pavement}\)?
- \(\text{Sprinkler}\perp \text{Rain}|\text{Wet season}, \text{Wet pavement}\)?

To make precise statements about conditional independence relations we do more work.

We need new graph vocabulary *and* conditional independence vocabulary.

Axiomatic characterisation of conditional independence. (Pearl 2008; Steffen L. Lauritzen 1996).

**Theorem**: (Pearl 2008)
Let \(\mathbf{W},\mathbf{X},\mathbf{Y},\mathbf{Z}\subseteq\mathbf{V}\) be disjoint subsets.

Then the relation \(\cdot\perp\cdot|\cdot\) satisfies the following relations:

\[\begin{aligned} \mathbf{X} \perp \mathbf{Z} |\mathbf{Y} & \Leftrightarrow & \mathbf{Z}\perp \mathbf{X} | \mathbf{Y} && \text{ Symmetry }&\\ \mathbf{X} \perp \mathbf{Y}\cup \mathbf{W} |\mathbf{Z} & \Rightarrow & \mathbf{X} \perp \mathbf{Y}|\mathbf{Z} \text{ and } \mathbf{X} \perp \mathbf{W}|\mathbf{Z} && \text{ Decomposition }&\\ \mathbf{X} \perp \mathbf{Y}\cup \mathbf{W} |\mathbf{Z} & \Rightarrow & \mathbf{X} \perp \mathbf{Y}|\mathbf{Z}\cup\mathbf{W} && \text{ Weak Union }&\\ \mathbf{X} \perp \mathbf{Y} |\mathbf{Z} \text{ and } \mathbf{X} \perp \mathbf{W}|\mathbf{Z}\cup \mathbf{Y} & \Rightarrow & \mathbf{X} \perp \mathbf{Y}\cup \mathbf{W}|\mathbf{Z} && \text{ Contraction }&\\ \mathbf{X} \perp \mathbf{Y} |\mathbf{Z}\cup \mathbf{W} \text{ and } \mathbf{X} \perp \mathbf{W} |\mathbf{Z}\cup \mathbf{Y} & \Rightarrow & \mathbf{X}\perp \mathbf{W}\cup\mathbf{Y} | \mathbf{Z} && \text{ Intersection } & (*)\\ \end{aligned}\]

(*) The Intersection axiom only holds for strictly positive distributions.

How can we relate this to the topology of the graph?

The flow of conditional information does not correspond exactly to the marginal factorization, but it relates. (mention UG connections?)

**Definition**:
A set \(\mathbf{S}\) *blocks* a path \(\pi\) from \(X\) to \(Y\) in a DAG \(\mathcal{G}\) if either

- there is a node \(a\in\pi\) which *is not* a collider on \(\pi\) such that \(a\in\mathbf{S}\), or
- there is a node \(b\in\pi\) which *is* a collider on \(\pi\) such that \(\operatorname{descendants}(b)\cap\mathbf{S}=\emptyset.\)

(A *collider* on \(\pi\) is a node at which two arrows of \(\pi\) meet head-to-head, as in \(\cdot\rightarrow b\leftarrow\cdot\).)

If a path is not blocked, it is *active*.

**Definition**:
A set \(\mathbf{S}\) *d-separates* two subsets of nodes
\(\mathbf{X},\mathbf{Y}\subseteq\mathbf{V}\)
if it blocks *every* path between every pair of nodes \((A,B)\)
such that \(A\in\mathbf{X},\, B\in\mathbf{Y}.\)

This looks ghastly and unintuitive, but we have to live with it because it is the shortest path to making simple statements about conditional independence DAGs without horrible circumlocutions, or starting from undirected graphs, which is tedious.
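Ghastly or not, the definition is mechanical enough to execute directly. Here is a minimal sketch (the graph encoding and the exponential path-enumeration approach are mine, suitable only for toy graphs), applied to the wet-pavement example:

```python
# The wet-pavement DAG: Season -> Sprinkler, Season -> Rain,
# Sprinkler -> Pavement, Rain -> Pavement.
EDGES = {("Season", "Sprinkler"), ("Season", "Rain"),
         ("Sprinkler", "Pavement"), ("Rain", "Pavement")}

def descendants(node, edges):
    """Descendants of `node`, including `node` itself (our convention)."""
    out, frontier = {node}, [node]
    while frontier:
        a = frontier.pop()
        for (u, v) in edges:
            if u == a and v not in out:
                out.add(v)
                frontier.append(v)
    return out

def paths(x, y, edges, seen=()):
    """All simple undirected paths from x to y, as lists of nodes."""
    if x == y:
        yield [y]
        return
    for (u, v) in edges:
        for (a, b) in ((u, v), (v, u)):
            if a == x and b not in seen:
                for rest in paths(b, y, edges, seen + (x,)):
                    yield [x] + rest

def blocked(path, S, edges):
    """Does the set S block this path?"""
    for i in range(1, len(path) - 1):
        collider = ((path[i - 1], path[i]) in edges
                    and (path[i + 1], path[i]) in edges)
        if not collider and path[i] in S:
            return True  # a conditioned-on non-collider blocks
        if collider and not (descendants(path[i], edges) & S):
            return True  # a collider blocks unless S hits it or a descendant
    return False

def d_separated(x, y, S, edges):
    return all(blocked(p, set(S), edges) for p in paths(x, y, edges))

print(d_separated("Sprinkler", "Rain", set(), EDGES))               # False
print(d_separated("Sprinkler", "Rain", {"Season"}, EDGES))          # True
print(d_separated("Sprinkler", "Rain", {"Pavement"}, EDGES))        # False
print(d_separated("Sprinkler", "Rain", {"Season", "Pavement"}, EDGES))  # False
```

These four outputs answer the sprinkler questions below: the fork through Season is open until conditioned on, and conditioning on the collider Wet pavement reopens the other path.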

**Theorem**: (Pearl 2008; Steffen L. Lauritzen 1996).
If the joint distribution of \(\mathbf{V}\) factorizes according to the DAG
\(\mathcal{G}\), then for two subsets of variables,
\(\mathbf{S}\) *d*-separating \(\mathbf{X}\) and \(\mathbf{Y}\) implies \(\mathbf{X}\perp\mathbf{Y}|\mathbf{S}\); conversely, if \(\mathbf{X}\perp\mathbf{Y}|\mathbf{S}\) holds in *every* distribution factorizing according to \(\mathcal{G}\), then \(\mathbf{S}\) *d*-separates \(\mathbf{X}\) and \(\mathbf{Y}\).

This puts us in a position to make non-awful, more intuitive statements about the conditional independence relationships that we may read off the DAG.

**Corollary**:
The DAG Markov property.

\[ X \perp \operatorname{descendants}(X)^C|\operatorname{parents}(X) \]

**Corollary**:
The DAG Markov blanket.

Define

\[ \operatorname{blanket}(X):= \operatorname{parents}(X)\cup \operatorname{children}(X)\cup \operatorname{coparents}(X) \]

Then

\[ X\perp \left(\operatorname{blanket}(X)\cup\{X\}\right)^C|\operatorname{blanket}(X) \]
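The blanket can be read off the parent sets mechanically. A small sketch (the dictionary encoding of the six-variable example from earlier is mine):

```python
# Parent sets of the six-variable structural model from earlier.
PARENTS = {
    "X1": set(), "X2": {"X1"}, "X3": set(),
    "X4": {"X1", "X2", "X3"}, "X5": {"X3", "X4"}, "X6": {"X3", "X4"},
}

def blanket(x, parents):
    """parents ∪ children ∪ co-parents of x."""
    children = {v for v, ps in parents.items() if x in ps}
    coparents = {p for c in children for p in parents[c]} - {x}
    return parents[x] | children | coparents

print(sorted(blanket("X4", PARENTS)))  # ['X1', 'X2', 'X3', 'X5', 'X6']
print(sorted(blanket("X1", PARENTS)))  # ['X2', 'X3', 'X4']
```

So \(X_4\) is shielded from the rest of the graph by everything except \(X_1,\dots,X_3,X_5,X_6\), i.e. by all the other nodes here; in a larger graph the blanket is typically a small neighbourhood.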

## Causal interpretation

Finally!

We have a DAG \(\mathcal{G}\) and a set of variables \(\mathbf{V}\) to which we wish to give a causal interpretation.

Assume

- The joint distribution of \(\mathbf{V}\) factorizes according to \(\mathcal{G}\)
- \(X\rightarrow Y\) means “causes” (The Causal Markov property)
- We additionally assume *faithfulness*, that is, that \(X\) and \(Y\) are dependent iff there is an active path connecting them.

So, are we done? Only if correlation equals causation.

We add the additional condition that

- all the relevant variables are included in the graph. (We coyly avoid making this precise)

The BBC raised one possible confounding variable:

[…] Eric Cornell, who won the Nobel Prize in Physics in 2001, told Reuters “I attribute essentially all my success to the very large amount of chocolate that I consume. Personally I feel that milk chocolate makes you stupid… dark chocolate is the way to go. It’s one thing if you want a medicine or chemistry Nobel Prize but if you want a physics Nobel Prize it pretty much has got to be dark chocolate.”

Finally, we need to discuss the relationship between conditional dependence and causal effect. This is the difference between, say,

\[ P(\text{Wet pavement}|\text{Sprinkler}=on) \]

and

\[ P(\text{Wet pavement}|\operatorname{do}(\text{Sprinkler}=on)) \]

Called “truncated factorization” in the paper. \(\text{do}\)-calculus and graph surgery.

If we know \(P\), this is relatively easy: we delete the dependence of the intervened-upon variable on its parents and clamp its value, which we show graphically as wiping out its incoming links.

Now suppose we are not given complete knowledge of \(P\), but only *some* of the conditional distributions. (there are *unobservable variables*).
This is the setup of observational studies and epidemiology and so on.

What variables *must* we know the conditional distributions of in order to know the causal effect? That is, we call a set of covariates \(\mathbf{S}\) an *admissible set* (or *sufficient set*)
with respect to identifying the effect of \(X\) on \(Y\) iff

\[ p(Y=y|do(X=x))=\sum_{\mathbf{s}} P(Y=y|X=x,\mathbf{S}=\mathbf{s}) P(\mathbf{S}=\mathbf{s}) \]

**Criterion 1**:
The parents of a cause are an admissible set (Pearl 2009a).

**Criterion 2**:
The back-door criterion.

A set \(\mathbf{S}\) such that

- \(\mathbf{S}\cap\operatorname{descendants}(X)=\emptyset\), and
- \(\mathbf{S}\) blocks all paths between \(X\) and \(Y\) which start with an arrow *into* \(X\)

is admissible. This is a sufficient condition.

Causal properties of sufficient sets:

\[ P(Y=y|\operatorname{do}(X=x),S=s)=P(Y=y|X=x,S=s) \]

Hence

\[ P(Y=y|\operatorname{do}(X=x))=\sum_s P(Y=y|X=x,S=s)P(S=s) \]
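To see this at work numerically, here is a sketch on the wet-pavement DAG (the random parameterization is mine). \(S=\{\text{Season}\}\) is the parent of the cause and satisfies the back-door criterion, so adjusting over it reproduces the interventional distribution computed by graph surgery (the truncated factorization), while the naive conditional generically does not.

```python
import random

random.seed(0)

def coin():
    """A random pmf over {0, 1}."""
    q = random.random()
    return [q, 1 - q]

# Binary wet-pavement model: Season -> {Sprinkler, Rain} -> Pavement.
pW = coin()                                            # p(season)
pX = {w: coin() for w in (0, 1)}                       # p(sprinkler | season)
pR = {w: coin() for w in (0, 1)}                       # p(rain | season)
pY = {(x, r): coin() for x in (0, 1) for r in (0, 1)}  # p(pavement | sprinkler, rain)

def joint(w, x, r, y):
    return pW[w] * pX[w][x] * pR[w][r] * pY[x, r][y]

# 1. Graph surgery: drop the factor p(x|w) and clamp X = x_star.
def p_do(y, x_star):
    return sum(pW[w] * pR[w][r] * pY[x_star, r][y]
               for w in (0, 1) for r in (0, 1))

# 2. Back-door adjustment over the admissible set S = {Season}.
def p_adjust(y, x_star):
    total = 0.0
    for w in (0, 1):
        num = sum(joint(w, x_star, r, y) for r in (0, 1))
        den = sum(joint(w, x_star, r, yy) for r in (0, 1) for yy in (0, 1))
        total += (num / den) * pW[w]   # p(y|x,w) weighted by p(w)
    return total

# 3. The naive conditional, for contrast.
def p_cond(y, x_star):
    num = sum(joint(w, x_star, r, y) for w in (0, 1) for r in (0, 1))
    den = sum(joint(w, x_star, r, yy)
              for w in (0, 1) for r in (0, 1) for yy in (0, 1))
    return num / den

print(p_do(1, 1), p_adjust(1, 1))  # agree exactly
print(p_cond(1, 1))                # generically differs: confounded by Season
```

The agreement between the first two is algebraic, not approximate: dividing the joint by \(p(w)p(x|w)\) leaves exactly the truncated factorization.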

## Examples

- \(i,j\) are individuals,
- \(Z\) denote observed traits,
- \(X\) denote latent traits
- \(Y\) denote observed outcomes
- \(A\) is a network tie

\(X_i\) d-separates \(Y_i(t)\) from \(A_{ij}\). Since \(X_i\) is latent and unobserved, \(Y_i(t) \leftarrow X_i \rightarrow A_{ij}\) is a confounding path from \(Y_i(t)\) to \(A_{ij}\). Likewise \(Y_j(t-1)\leftarrow X_j \rightarrow A_{ij}\) is a confounding path from \(Y_j(t-1)\) to \(A_{ij}\). Thus, \(Y_i(t)\) and \(Y_j(t-1)\) are d-connected when conditioning on all the observed (boxed) variables […]. Hence the direct effect of \(Y_j(t-1)\) on \(Y_i(t)\) is not identifiable.

## Recommended reading

People recommended Koller and Friedman to me, which includes many different flavours of DAG model and many different methods (Koller and Friedman 2009), but it didn’t suit me, being somehow too detailed and too non-specific at the same time.

Spirtes et al. (Spirtes, Glymour, and Scheines 2001) and Pearl (Pearl 2009a) are readable. See also Pearl’s edited highlights (Pearl 2009b). Lauritzen (Steffen L. Lauritzen 1996) is clear, but the constructions are long, detailed, and more general than needed here (partially directed graphs).

Lauritzen’s shorter introduction (Steffen L. Lauritzen 2000) is nice if you can get it; Not overwhelming, starts with a slightly more general formalism (DAGs as a special case of PDAGs, moral graphs everywhere). Murphy’s textbook (Murphy 2012) has a minimal introduction intermingled with some related models, with a more ML, “expert systems”-flavoured and more Bayesian formalism.

## References

*Proceedings of the National Academy of Sciences*106 (51): 21544–49.

*Conditional Specification of Statistical Models*. Springer Science & Business Media.

*Proceedings of the National Academy of Sciences*113 (27): 7345–52.

*AAAI*, 2410–16.

*arXiv:1507.03652 [Math, Stat]*, July.

*The Annals of Applied Statistics*9 (1): 247–74.

*Mathematical Methods of Operations Research*77 (3): 357–70.

*Annual Review of Statistics and Its Application*1 (1): 255–78.

*Statistical Methods in Medical Research*22 (5): 466–92.

*arXiv:1411.1557 [Stat]*, November.

*The Annals of Statistics*40 (1): 294–321.

*Biometrika*, October, asr041.

*Handbook of Causal Analysis for Social Research*, edited by Stephen L. Morgan, 245–73. Handbooks of Sociology and Social Research. Dordrecht: Springer Netherlands.

*arXiv:1405.1868 [Stat]*, May.

*American Journal of Sociology*117 (3): 955–66.

*Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics*, 128–35. Citeseer.

*Learning in Graphical Models*. Cambridge, Mass.: MIT Press.

*The Handbook of Brain Theory and Neural Networks*, 490–96.

*Handbook of Neural Networks and Brain Theory*.

*Journal of Machine Learning Research*8 (May): 613–36.

*arXiv Preprint arXiv:1510.04740*.

*IJCAI*, 83:190–93. San Francisco, CA, USA: Citeseer.

Koller, Daphne, and Nir Friedman. 2009. *Probabilistic Graphical Models: Principles and Techniques*. Cambridge, MA: MIT Press.

*Journal of the Royal Statistical Society. Series B (Methodological)*50 (2): 157–224.

Lauritzen, Steffen L. 1996. *Graphical Models*. Oxford Statistical Science Series. Clarendon Press.

Lauritzen, Steffen L. 2000. “Causal Inference from Graphical Models.” In *Complex Stochastic Systems*, 63–107. CRC Press.

*arXiv Preprint arXiv:1307.5636*.

*Nature Methods*7 (4): 247–48.

*The Annals of Statistics*37 (6A): 3133–64.

*Proceedings of the National Academy of Sciences*107 (14): 6286–91.

*New England Journal of Medicine*367 (16): 1562–64.

*Proceedings of the 24th International Conference on Machine Learning*, 625–32. ACM.

Murphy, Kevin P. 2012. *Machine Learning: A Probabilistic Perspective*. Adaptive Computation and Machine Learning series. Cambridge, MA: MIT Press.

*Learning Bayesian Networks*. Vol. 38. Prentice Hal, Paperback.

*Social Networks*33 (3): 211–18.

*Proceedings of the Second AAAI Conference on Artificial Intelligence*, 133–36. AAAI’82. Pittsburgh, Pennsylvania: AAAI Press.

*Artificial Intelligence*29 (3): 241–88.

Pearl, Judea. 2008. *Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference*. Rev. 2nd printing. The Morgan Kaufmann Series in Representation and Reasoning. San Francisco, Calif.: Morgan Kaufmann.

Pearl, Judea. 2009. “Causal Inference in Statistics: An Overview.” *Statistics Surveys* 3: 96–146.

Pearl, Judea. 2009. *Causality: Models, Reasoning and Inference*. 2nd ed. Cambridge University Press.

*arXiv:1501.01332 [Stat]*, January.

*2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton)*, 958–65.

*Statistical Science*21 (2): 206–22.

*Use of Directed Acyclic Graphs*. Agency for Healthcare Research and Quality (US).

*arXiv:1607.06565 [Physics, Stat]*, July.

*Sociological Methods & Research*40 (2): 211–39.

*The Journal of Machine Learning Research*9: 1941–79.

*arXiv:1411.2127 [Stat]*, November.

*Proceedings of the Conference on Empirical Methods in Natural Language Processing*, 145–56. Association for Computational Linguistics.

Spirtes, Peter, Clark Glymour, and Richard Scheines. 2001. *Causation, Prediction, and Search*. 2nd ed. Adaptive Computation and Machine Learning. Cambridge, MA: MIT Press.

*Statistical Methods in Medical Research*21 (1): 7–30.

*The Annals of Mathematical Statistics*5 (3): 161–215.

*Exploring Artificial Intelligence in the New Millennium*, edited by G. Lakemeyer and B. Nebel, 239–36. Morgan Kaufmann Publishers.

*arXiv:1202.3775 [Cs, Stat]*, February.
