Causal inference in highly parameterized ML



This page is about applying causal graph structure in the challenging environment of no-holds-barred nonparametric machine learning algorithms such as neural nets and their ilk. I am interested in this because it seems necessary, and indeed kind of obvious, for handling things like dataset shift, and yet it is often ignored. What is that about?

I do not know at the moment. This is a link salad for now.

Léon Bottou, From Causal Graphs to Causal Invariance:

For many problems, it’s difficult to even attempt drawing a causal graph. While structural causal models provide a complete framework for causal inference, it is often hard to encode known physical laws (such as Newton’s gravitation, or the ideal gas law) as causal graphs. In familiar machine learning territory, how does one model the causal relationships between individual pixels and a target prediction? This is one of the motivating questions behind the paper Invariant Risk Minimization (IRM). In place of structured graphs, the authors elevate invariance to the defining feature of causality.
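
To make the invariance idea concrete, here is a minimal sketch of the IRMv1 penalty from Arjovsky et al. (2020) in PyTorch. The toy environments and the single linear featurizer are my own placeholders, not the paper's setup; the essential trick is that the penalty is the squared gradient of each environment's risk with respect to a dummy classifier scale fixed at 1.0, which vanishes when one classifier is simultaneously optimal across environments.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    # Squared gradient of the environment risk with respect to a
    # dummy scale w = 1.0 multiplying the classifier output.
    scale = torch.ones(1, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad, = torch.autograd.grad(risk, scale, create_graph=True)
    return grad.pow(2).sum()

# Toy usage: two hypothetical "environments" sharing one model.
torch.manual_seed(0)
model = torch.nn.Linear(2, 1)   # featurizer and classifier in one
envs = [(torch.randn(64, 2), torch.randint(0, 2, (64, 1)).float())
        for _ in range(2)]
lam = 1.0                       # penalty weight, a free hyperparameter
total = torch.zeros(())
for x, y in envs:
    logits = model(x)
    total = total + F.binary_cross_entropy_with_logits(logits, y) \
                  + lam * irmv1_penalty(logits, y)
total.backward()                # gradients now reach the model parameters
```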

Nisha Muktewar and Chris Wallace's Causality for Machine Learning is the book Bottou recommends on this theme.

For coders, Ben Dickson writes on Why machine learning struggles with causality.

Künzel et al. (2019) (HT Mike McKenna) looks interesting: it describes generic metalearners that turn any supervised ML regression method into an estimator of heterogeneous treatment effects.

… We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms—such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks—to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the metalearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms.
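
To fix ideas, here is a minimal sketch of the X-learner with scikit-learn random forests as base learners. The function names and the synthetic data are mine, and this is not the authors' implementation, but the three stages follow the paper: fit outcome models per arm, regress imputed individual effects on covariates, then blend the two CATE models with a propensity weight.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def x_learner(X, y, t, base=RandomForestRegressor):
    """X-learner sketch. X: covariates, y: outcomes, t: 0/1 treatment."""
    X0, y0 = X[t == 0], y[t == 0]
    X1, y1 = X[t == 1], y[t == 1]
    # Stage 1: outcome models for the control and treated arms.
    mu0 = base().fit(X0, y0)
    mu1 = base().fit(X1, y1)
    # Stage 2: impute individual effects, then regress them on X.
    tau1 = base().fit(X1, y1 - mu0.predict(X1))
    tau0 = base().fit(X0, mu1.predict(X0) - y0)
    # Stage 3: propensity-weighted blend of the two CATE estimates.
    g = RandomForestClassifier().fit(X, t)
    def cate(Xnew):
        p = g.predict_proba(Xnew)[:, 1]
        return p * tau0.predict(Xnew) + (1 - p) * tau1.predict(Xnew)
    return cate

# Synthetic check with a known constant treatment effect of 2.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
t = rng.integers(0, 2, size=500)
y = X[:, 0] + 2.0 * t + rng.normal(scale=0.1, size=500)
print(x_learner(X, y, t)(X[:5]))   # values should hover around 2.0
```

Weighting by the propensity score is what lets the X-learner exploit unbalanced designs: where one arm is rare, the blend leans on the CATE model whose imputation step used the outcome model fitted on the abundant arm.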

There is a fun body of work by what is in my mind the Central European causality-ML thinktank, which includes various interesting people: Bernhard Schölkopf, Jonas Peters, Joris Mooij, Stephan Bongers, Dominik Janzing, etc. I would love to understand everything that is going on here. Perhaps I should start with the book (Peters, Janzing, and Schölkopf 2017) (Free PDF), or the chatty casual introduction (Schölkopf 2019).

For a good explanation of what they are about by example, see Bernhard Schölkopf: Causality and Exoplanets.

I am particularly curious about their work in causality in continuous fields, e.g. Bongers et al. (2020); Bongers and Mooij (2018); Bongers et al. (2016); Rubenstein et al. (2018).
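
For flavour, here is a toy example of my own (not from those papers) of the basic move: in a stable linear ODE, the equilibrium map behaves like a structural causal model, and an intervention corresponds to clamping a variable and deleting its own dynamics.

```python
import numpy as np

# Stable linear ODE dx/dt = A x + b, where x1 drives x2 but not vice versa.
A = np.array([[-1.0, 0.0],
              [0.5, -1.0]])
b = np.array([1.0, 0.0])

def equilibrium(A, b):
    # Observational equilibrium: solve A x + b = 0.
    return np.linalg.solve(A, -b)

def equilibrium_do(A, b, i, value):
    # Equilibrium under do(x_i = value): drop x_i's own dynamics and
    # treat x_i as a constant input to the remaining equations.
    rest = [j for j in range(len(b)) if j != i]
    x = np.empty(len(b))
    x[i] = value
    x[rest] = np.linalg.solve(A[np.ix_(rest, rest)],
                              -(b[rest] + A[rest, i] * value))
    return x

print(equilibrium(A, b))           # [1.0, 0.5]
print(equilibrium_do(A, b, 0, 3))  # do(x1=3) moves x2: x2 is downstream
print(equilibrium_do(A, b, 1, 3))  # do(x2=3) leaves x1 alone: x1 is upstream
```

The asymmetry under intervention is the causal content here; this is roughly the flavour of the mapping from dynamical systems to structural causal models in those papers.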

References

Arjovsky, Martin, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. 2020. “Invariant Risk Minimization.” arXiv:1907.02893 [cs, Stat], March. http://arxiv.org/abs/1907.02893.
Besserve, Michel, Arash Mehrjou, Rémy Sun, and Bernhard Schölkopf. 2019. “Counterfactuals Uncover the Modular Structure of Deep Generative Models.” arXiv:1812.03253 [cs, Stat]. http://arxiv.org/abs/1812.03253.
Bongers, Stephan, Patrick Forré, Jonas Peters, Bernhard Schölkopf, and Joris M. Mooij. 2020. “Foundations of Structural Causal Models with Cycles and Latent Variables.” arXiv:1611.06221 [cs, Stat], October. http://arxiv.org/abs/1611.06221.
Bongers, Stephan, and Joris M. Mooij. 2018. “From Random Differential Equations to Structural Causal Models: The Stochastic Case.” arXiv:1803.08784 [cs, Stat], March. http://arxiv.org/abs/1803.08784.
Bongers, Stephan, Jonas Peters, Bernhard Schölkopf, and Joris M. Mooij. 2016. “Structural Causal Models: Cycles, Marginalizations, Exogenous Reparametrizations and Reductions.” arXiv:1611.06221 [cs, Stat], November. http://arxiv.org/abs/1611.06221.
Friedrich, Sarah, Gerd Antes, Sigrid Behr, Harald Binder, Werner Brannath, Florian Dumpert, Katja Ickstadt, et al. 2020. “Is There a Role for Statistics in Artificial Intelligence?” arXiv:2009.09070 [cs], September. http://arxiv.org/abs/2009.09070.
Goyal, Anirudh, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. 2020. “Recurrent Independent Mechanisms.” arXiv:1909.10893 [cs, Stat], November. http://arxiv.org/abs/1909.10893.
Johnson, Matthew J, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. 2016. “Composing Graphical Models with Neural Networks for Structured Representations and Fast Inference.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 2946–54. Curran Associates, Inc. http://papers.nips.cc/paper/6379-composing-graphical-models-with-neural-networks-for-structured-representations-and-fast-inference.pdf.
Kocaoglu, Murat, Christopher Snyder, Alexandros G. Dimakis, and Sriram Vishwanath. 2017. “CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training.” arXiv:1709.02023 [cs, Math, Stat], September. http://arxiv.org/abs/1709.02023.
Künzel, Sören R., Jasjeet S. Sekhon, Peter J. Bickel, and Bin Yu. 2019. “Metalearners for Estimating Heterogeneous Treatment Effects Using Machine Learning.” Proceedings of the National Academy of Sciences 116 (10): 4156–65. https://doi.org/10.1073/pnas.1804597116.
Lattimore, Finnian Rachel. 2017. “Learning How to Act: Making Good Decisions with Machine Learning.” https://doi.org/10.25911/5d67b766194ec.
Leeb, Felix, Guilia Lanzillotta, Yashas Annadani, Michel Besserve, Stefan Bauer, and Bernhard Schölkopf. 2021. “Structure by Architecture: Disentangled Representations Without Regularization.” arXiv:2006.07796 [cs, Stat], July. http://arxiv.org/abs/2006.07796.
Locatello, Francesco, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. 2019. “Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations.” arXiv:1811.12359 [cs, Stat], June. http://arxiv.org/abs/1811.12359.
Louizos, Christos, Uri Shalit, Joris M Mooij, David Sontag, Richard Zemel, and Max Welling. 2017. “Causal Effect Inference with Deep Latent-Variable Models.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 6446–56. Curran Associates, Inc. http://papers.nips.cc/paper/7223-causal-effect-inference-with-deep-latent-variable-models.pdf.
Lu, Chaochao, Yuhuai Wu, José Miguel Hernández-Lobato, and Bernhard Schölkopf. 2021. “Nonlinear Invariant Risk Minimization: A Causal Approach.” arXiv:2102.12353 [cs, Stat], June. http://arxiv.org/abs/2102.12353.
Mooij, Joris M., Jonas Peters, Dominik Janzing, Jakob Zscheischler, and Bernhard Schölkopf. 2014. “Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks.” arXiv:1412.3773 [cs, Stat], December. http://arxiv.org/abs/1412.3773.
Ng, Ignavier, Zhuangyan Fang, Shengyu Zhu, Zhitang Chen, and Jun Wang. 2020. “Masked Gradient-Based Causal Structure Learning.” arXiv:1910.08527 [cs, Stat], February. http://arxiv.org/abs/1910.08527.
Ng, Ignavier, Shengyu Zhu, Zhitang Chen, and Zhuangyan Fang. 2019. “A Graph Autoencoder Approach to Causal Structure Learning.” In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1911.07420.
Peters, Jonas, Dominik Janzing, and Bernhard Schölkopf. 2017. Elements of Causal Inference: Foundations and Learning Algorithms. Adaptive Computation and Machine Learning Series. Cambridge, Massachusetts: The MIT Press. https://www.dropbox.com/s/dl/gkmsow492w3oolt/11283.pdf.
Rakesh, Vineeth, Ruocheng Guo, Raha Moraffah, Nitin Agarwal, and Huan Liu. 2018. “Linked Causal Variational Autoencoder for Inferring Paired Spillover Effects.” In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 1679–82. CIKM ’18. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3269206.3269267.
Rotnitzky, Andrea, and Ezequiel Smucler. 2020. “Efficient Adjustment Sets for Population Average Causal Treatment Effect Estimation in Graphical Models.” Journal of Machine Learning Research 21 (188): 1–86. http://jmlr.org/papers/v21/19-1026.html.
Rubenstein, Paul K., Stephan Bongers, Bernhard Schölkopf, and Joris M. Mooij. 2018. “From Deterministic ODEs to Dynamic Structural Causal Models.” In Uncertainty in Artificial Intelligence. http://arxiv.org/abs/1608.08028.
Schölkopf, Bernhard. 2019. “Causality for Machine Learning.” arXiv:1911.10500 [cs, Stat], December. http://arxiv.org/abs/1911.10500.
Schölkopf, Bernhard, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. 2021. “Toward Causal Representation Learning.” Proceedings of the IEEE 109 (5): 612–34. https://doi.org/10.1109/JPROC.2021.3058954.
Wang, Yixin, and Michael I. Jordan. 2021. “Desiderata for Representation Learning: A Causal Perspective.” arXiv:2109.03795 [cs, Stat], September. http://arxiv.org/abs/2109.03795.
Yang, Mengyue, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, and Jun Wang. 2020. “CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models.” arXiv:2004.08697 [cs, Stat], July. http://arxiv.org/abs/2004.08697.
Zhang, Kun, Mingming Gong, Petar Stojanov, Biwei Huang, Qingsong Liu, and Clark Glymour. 2020. “Domain Adaptation as a Problem of Inference on Graphical Models.” In Advances in Neural Information Processing Systems. Vol. 33. https://arxiv.org/abs/2002.03278.
Zhang, Rui, Masaaki Imaizumi, Bernhard Schölkopf, and Krikamol Muandet. 2021. “Maximum Moment Restriction for Instrumental Variable Regression.” arXiv:2010.07684 [cs], February. http://arxiv.org/abs/2010.07684.
