Applying causal graph structure in the challenging environment of a no-holds-barred nonparametric machine learning algorithm such as a neural net or its ilk. I am interested in this because it seems necessary, and kind of obvious, for handling things like dataset shift, yet it is often ignored. What is that about?

I do not know at the moment. This is a link salad for now.

Léon Bottou, From Causal Graphs to Causal Invariance:

> For many problems, it’s difficult to even attempt drawing a causal graph. While structural causal models provide a complete framework for causal inference, it is often hard to encode known physical laws (such as Newton’s gravitation, or the ideal gas law) as causal graphs. In familiar machine learning territory, how does one model the causal relationships between individual pixels and a target prediction? This is one of the motivating questions behind the paper Invariant Risk Minimization (IRM). In place of structured graphs, the authors elevate invariance to the defining feature of causality.
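
To make the invariance idea concrete, here is a minimal sketch of the IRMv1 penalty from that paper, written in PyTorch with my own naming conventions. The penalty is the squared gradient of each environment's risk with respect to a fixed scalar "dummy classifier" w = 1.0; it vanishes exactly when one and the same classifier is optimal in every environment, which is how the paper operationalises invariance.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1 trick: wrap the predictions in a dummy scalar classifier w = 1.0
    # and measure the gradient of this environment's risk with respect to w.
    # Predictors that are simultaneously optimal in every environment make
    # this gradient zero.
    w = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * w, y)
    grad = torch.autograd.grad(loss, [w], create_graph=True)[0]
    return grad ** 2

def irm_objective(model, environments, lam=1.0):
    # Empirical risk plus the invariance penalty, summed over environments.
    total = 0.0
    for X_e, y_e in environments:
        logits = model(X_e).squeeze(-1)
        risk = F.binary_cross_entropy_with_logits(logits, y_e)
        total = total + risk + lam * irm_penalty(logits, y_e)
    return total
```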

He commends the Cloudera Fast Forward tutorial *Causality for Machine Learning*, a book by Nisha Muktewar and Chris Wallace, which is a nice bit of applied work on this theme.

For coders, Ben Dickson writes on *Why machine learning struggles with causality*.

Cheng Soon Ong recommends Finn Lattimore’s work to me as the latest and greatest on this theme.

See also biomedia-mira/deepscm: Repository for Deep Structural Causal Models for Tractable Counterfactual Inference (Pawlowski, Castro, and Glocker 2020).

There is a fun body of work by what is in my mind the Central European causality-ML thinktank, which includes various interesting people: Bernhard Schölkopf, Jonas Peters, Joris Mooij, Stephan Bongers, Dominik Janzing, etc. I would love to understand everything that is going on there. Perhaps I should start with the book (Peters, Janzing, and Schölkopf 2017) (Free PDF), or the chatty, casual introduction (Schölkopf 2019).

For a good explanation of what they are about by example, see Bernhard Schölkopf: Causality and Exoplanets.

I am particularly curious about their work in causality in continuous fields, e.g. Bongers et al. (2020); Bongers and Mooij (2018); Bongers et al. (2016); Rubenstein et al. (2018).

## Double learning

Künzel et al. (2019) (HT Mike McKenna) looks interesting: it is a generic recipe for turning off-the-shelf supervised ML methods into estimators of intervention effects.

> … We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms—such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks—to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the metalearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms.
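
For concreteness, here is a minimal sketch of the X-learner’s three stages, using scikit-learn random forests as base learners; the function and variable names are mine, and this compresses away the cross-fitting and tuning details that the paper and its reference implementation handle more carefully.

```python
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def x_learner(X, y, t):
    """X, y: covariate and outcome arrays; t: binary treatment indicator."""
    # Stage 1: fit separate outcome models on the control and treated arms.
    mu0 = RandomForestRegressor().fit(X[t == 0], y[t == 0])
    mu1 = RandomForestRegressor().fit(X[t == 1], y[t == 1])
    # Stage 2: impute individual treatment effects by crossing the arms,
    # then regress the imputed effects on covariates.
    d1 = y[t == 1] - mu0.predict(X[t == 1])  # imputed effects, treated units
    d0 = mu1.predict(X[t == 0]) - y[t == 0]  # imputed effects, control units
    tau1 = RandomForestRegressor().fit(X[t == 1], d1)
    tau0 = RandomForestRegressor().fit(X[t == 0], d0)
    # Stage 3: blend the two CATE estimates, weighted by a propensity model,
    # so each arm's estimate dominates where the other arm is data-poor.
    g = RandomForestClassifier().fit(X, t)

    def cate(X_new):
        e = g.predict_proba(X_new)[:, 1]  # estimated propensity of treatment
        return e * tau0.predict(X_new) + (1.0 - e) * tau1.predict(X_new)

    return cate
```

The unequal-group-size advantage advertised in the abstract comes from stage 2: the arm with few units borrows strength from the outcome model fitted on the larger arm, and the stage-3 propensity weights then downweight whichever imputed-effect model is less trustworthy at each point.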

See also Mishler and Kennedy (2021). Maybe related Shalit, Johansson, and Sontag (2017); Shi, Blei, and Veitch (2019).

## References

Arjovsky, Martin, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. 2020. “Invariant Risk Minimization.” *arXiv:1907.02893 [Cs, Stat]*, March.

*arXiv:1902.07409 [Stat]*, February.

*arXiv:1812.03253 [Cs, Stat]*.

Bongers, Stephan, Patrick Forré, Jonas Peters, and Joris M. Mooij. 2020. “Foundations of Structural Causal Models with Cycles and Latent Variables.” *arXiv:1611.06221 [Cs, Stat]*, October.

Bongers, Stephan, and Joris M. Mooij. 2018. “From Random Differential Equations to Structural Causal Models: The Stochastic Case.” *arXiv:1803.08784 [Cs, Stat]*, March.

Bongers, Stephan, Jonas Peters, Joris M. Mooij, and Bernhard Schölkopf. 2016. “Structural Causal Models: Cycles, Marginalizations, Exogenous Reparametrizations and Reductions.” *arXiv:1611.06221 [Cs, Stat]*, November.

*arXiv:2104.04103 [Cs, Stat]*, September.

*arXiv:2009.09070 [Cs]*, September.

Goyal, Anirudh, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. 2019. “Recurrent Independent Mechanisms.” *arXiv:1909.10893 [Cs, Stat]*, November.

*Advances in Neural Information Processing Systems 29*, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 2946–54. Curran Associates, Inc.

*arXiv:1709.02023 [Cs, Math, Stat]*, September.

Künzel, Sören R., Jasjeet S. Sekhon, Peter J. Bickel, and Bin Yu. 2019. “Metalearners for Estimating Heterogeneous Treatment Effects Using Machine Learning.” *Proceedings of the National Academy of Sciences* 116 (10): 4156–65.

*arXiv:2006.07796 [Cs, Stat]*, July.

Locatello, Francesco, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. 2019. “Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations.” *arXiv:1811.12359 [Cs, Stat]*, June.

*Advances in Neural Information Processing Systems 30*, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 6446–56. Curran Associates, Inc.

*arXiv:2102.12353 [Cs, Stat]*, June.

*arXiv:2109.00173 [Cs, Stat]*, August.

Mooij, Joris M., Jonas Peters, Dominik Janzing, Jakob Zscheischler, and Bernhard Schölkopf. 2014. “Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks.” *arXiv:1412.3773 [Cs, Stat]*, December.

*arXiv:1910.08527 [Cs, Stat]*, February.

*Advances in Neural Information Processing Systems*.

*arXiv:2110.10819 [Cs]*, October.

Pawlowski, Nick, Daniel C. Castro, and Ben Glocker. 2020. “Deep Structural Causal Models for Tractable Counterfactual Inference.” *arXiv:2006.06485 [Cs, Stat]*, October.

Peters, Jonas, Dominik Janzing, and Bernhard Schölkopf. 2017. *Elements of Causal Inference: Foundations and Learning Algorithms*. Adaptive Computation and Machine Learning Series. Cambridge, Massachusetts: The MIT Press.

*Proceedings of the 27th ACM International Conference on Information and Knowledge Management*, 1679–82. CIKM ’18. New York, NY, USA: Association for Computing Machinery.

*Journal of Machine Learning Research* 21 (188): 1–86.

Rubenstein, Paul K., Stephan Bongers, Bernhard Schölkopf, and Joris M. Mooij. 2018. “From Deterministic ODEs to Dynamic Structural Causal Models.” In *Uncertainty in Artificial Intelligence*.

Schölkopf, Bernhard. 2019. “Causality for Machine Learning.” *arXiv:1911.10500 [Cs, Stat]*, December.

Schölkopf, Bernhard, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. 2021. “Toward Causal Representation Learning.” *Proceedings of the IEEE* 109 (5): 612–34.

Shalit, Uri, Fredrik D. Johansson, and David Sontag. 2017. “Estimating Individual Treatment Effect: Generalization Bounds and Algorithms.” *arXiv:1606.03976 [Cs, Stat]*, May.

Shi, Claudia, David M. Blei, and Victor Veitch. 2019. “Adapting Neural Networks for the Estimation of Treatment Effects.” In *Proceedings of the 33rd International Conference on Neural Information Processing Systems*, 2507–17. Red Hook, NY, USA: Curran Associates Inc.

*arXiv:2109.03795 [Cs, Stat]*, September.

*arXiv:2004.08697 [Cs, Stat]*, July.

*Advances in Neural Information Processing Systems*. Vol. 33.

*arXiv:2010.07684 [Cs]*, February.
