Causal inference in learning to act

2026-01-13 — 2026-01-13

Wherein reinforcement learning is described as a natural complement to causal modelling, and offline learning is singled out as the setting where causal methods matter most, because data gathered by others may be confounded.

adaptive
agents
bandit problems
control
incentive mechanisms
learning
networks
stochastic processes
time series
utility

Learning to act clearly sounds like learning counterfactuals and interventions, right? (“What would happen if I took this action?”) So surely there’s a causal angle? Yes.


1 Reinforcement learning

The simplest analysis applies to bandit problems (Lattimore 2017).
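
As a reminder of the setting, here is a minimal ε-greedy sketch for a Bernoulli bandit (the arm means and all names are illustrative, not taken from Lattimore 2017). The point for the causal reading: each pull is an intervention do(A=a) chosen by the agent itself, so the per-arm value estimates are estimates of interventional expectations.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])  # hypothetical Bernoulli arm means
epsilon = 0.1

counts = np.zeros(3)
values = np.zeros(3)  # running estimates of E[R | do(A=a)]

for t in range(5_000):
    # Each pull is an intervention chosen by the agent, so nothing confounds it.
    a = rng.integers(3) if rng.random() < epsilon else int(np.argmax(values))
    r = rng.binomial(1, true_means[a])
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental mean update

print(values)  # estimates for well-explored arms approach true_means
```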

Since then, there have been many more developments:

Oberst and Sontag (2019):

We introduce an off-policy evaluation procedure for highlighting episodes where applying a reinforcement learned (RL) policy is likely to have produced a substantially different outcome than the observed policy. In particular, we introduce a class of structural causal models (SCMs) for generating counterfactual trajectories in finite partially observable Markov Decision Processes (POMDPs).
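
The machinery behind that paper is the Gumbel-max parameterization of categorical transitions, which makes counterfactual sampling tractable: infer exogenous Gumbel noise consistent with the observed outcome, then replay it under the alternative policy's transition probabilities. A minimal numpy sketch of that idea (function names and the example probabilities are mine, not the paper's):

```python
import numpy as np

def posterior_gumbel_noise(logits, observed, rng):
    """Sample Gumbel noise consistent with observed == argmax(logits + noise),
    via the standard top-down truncated-Gumbel construction."""
    logits = np.asarray(logits, dtype=float)
    k = logits.size
    # The maximum of (logits + Gumbel noise) is Gumbel with location logsumexp(logits).
    g_max = rng.gumbel(loc=np.logaddexp.reduce(logits))
    values = np.empty(k)
    values[observed] = g_max  # the observed category attains the maximum
    for j in range(k):
        if j == observed:
            continue
        g = rng.gumbel(loc=logits[j])
        # Truncate so that category j never exceeds the observed maximum.
        values[j] = -np.logaddexp(-g_max, -g)
    return values - logits  # exogenous noise per category

def counterfactual_outcome(obs_logits, cf_logits, observed, rng):
    """Replay the inferred noise under counterfactual transition logits."""
    noise = posterior_gumbel_noise(obs_logits, observed, rng)
    return int(np.argmax(np.asarray(cf_logits) + noise))

rng = np.random.default_rng(0)
obs_logits = np.log([0.7, 0.2, 0.1])  # transition probs under the observed policy
cf_logits = np.log([0.1, 0.2, 0.7])   # transition probs under the RL policy
print(counterfactual_outcome(obs_logits, cf_logits, observed=0, rng=rng))
```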

Schulte and Poupart (2024):

Reinforcement learning (RL) and causal modelling naturally complement each other. The goal of causal modelling is to predict the effects of interventions in an environment, while the goal of reinforcement learning is to select interventions that maximize the rewards the agent receives from the environment. Reinforcement learning includes the two most powerful sources of information for estimating causal relationships: temporal ordering and the ability to act on an environment. This paper examines which reinforcement learning settings we can expect to benefit from causal modelling, and how. In online learning, the agent has the ability to interact directly with their environment, and learn from exploring it. Our main argument is that in online learning, conditional probabilities are causal, and therefore offline RL is the setting where causal learning has the most potential to make a difference. Essentially, the reason is that when an agent learns from their own experience, there are no unobserved confounders that influence both the agent’s own exploratory actions and the rewards they receive. Our paper formalizes this argument. For offline RL, where an agent may and typically does learn from the experience of others, we describe previous and new methods for leveraging a causal model, including support for counterfactual queries.
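
Their core claim is easy to illustrate with a simulated bandit in which a hidden context drives both the logging policy's actions and the rewards (all numbers below are made up for illustration): the naive conditional contrast from logged data is badly biased, while an agent randomizing its own actions recovers the (null) effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hidden context U drives both the logging policy's action and the reward.
u = rng.binomial(1, 0.5, n)                             # unobserved confounder
a_logged = rng.binomial(1, np.where(u == 1, 0.9, 0.1))  # logger prefers a=1 when u=1
r_logged = rng.normal(loc=u)                            # the action has zero true effect

# Offline, the naive conditional contrast E[R | A=1] - E[R | A=0] is confounded by U.
offline = r_logged[a_logged == 1].mean() - r_logged[a_logged == 0].mean()

# Online, the agent randomizes its own actions, so U cannot influence them.
a_online = rng.binomial(1, 0.5, n)
r_online = rng.normal(loc=u)
online = r_online[a_online == 1].mean() - r_online[a_online == 0].mean()

print("true effect:                    0.000")
print(f"offline (confounded) estimate: {offline:+.3f}")  # roughly +0.8
print(f"online (randomized) estimate:  {online:+.3f}")   # roughly  0.0
```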

2 References

Caniglia, Murray, Hernán, et al. n.d. “Estimating Optimal Dynamic Treatment Strategies Under Resource Constraints Using Dynamic Marginal Structural Models.” Statistics in Medicine.
Cao, Feng, Fang, et al. 2024. “Towards Empowerment Gain Through Causal Structure Learning in Model-Based Reinforcement Learning.” In.
———, et al. 2025. “Towards Empowerment Gain Through Causal Structure Learning in Model-Based RL.”
Cao, Feng, Huo, et al. 2025. “Causal Action Empowerment for Efficient Reinforcement Learning in Embodied Agents.” Science China Information Sciences.
Duong, Gupta, and Nguyen. 2024. “Causal Discovery via Bayesian Optimization.” In.
Fernández-Loría, and Provost. 2021. “Causal Decision Making and Causal Effect Estimation Are Not the Same… and Why It Matters.”
Halpern. 2016. Actual Causality.
Kekić, Schneider, Büchler, et al. 2025. “Learning Nonlinear Causal Reductions to Explain Reinforcement Learning Policies.”
Lattimore. 2017. “Learning How to Act: Making Good Decisions with Machine Learning.”
Meulemans, Schug, Kobayashi, et al. 2023. “Would I Have Gotten That Reward? Long-Term Credit Assignment by Counterfactual Contribution Analysis.”
Oberst, and Sontag. 2019. “Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models.” In Proceedings of the 36th International Conference on Machine Learning.
Robertson, Reuter, Guo, et al. 2025. “Do-PFN: In-Context Learning for Causal Effect Estimation.”
Schulte, and Poupart. 2024. “Why Online Reinforcement Learning Is Causal.”
Shpitser, and Pearl. n.d. “Complete Identification Methods for the Causal Hierarchy.”
Thornley. 2024. “The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists.” Philosophical Studies.
Yao, and Mooij. 2025. “Σ-Maximal Ancestral Graphs.”