Causal inference in learning to act
2026-01-13 — 2026-01-13
Wherein reinforcement learning is described as naturally complementary to causal modelling, and offline learning is singled out as the setting where causal methods have the most to offer, because learning from others’ data may introduce confounders.
Learning to act sounds a lot like learning interventions and counterfactuals, right? (“What would happen if I took this action?”) So surely there’s a causal angle here? Yes.
1 Reinforcement learning
The simplest analysis applies to bandit problems (Lattimore 2017).
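To make the causal angle concrete in the bandit setting, here is a toy simulation (my own illustration, not drawn from Lattimore 2017): a hidden confounder drives both the logging policy’s arm choice and the reward, so the observational value of each arm differs from the interventional value an agent would measure by pulling arms itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder U drives both the logging policy's arm choice and the reward.
u = rng.integers(0, 2, size=n)                        # U ~ Bernoulli(0.5)
logged_arm = np.where(rng.random(n) < 0.9, u, 1 - u)  # behaviour policy mostly picks arm = U
reward = rng.binomial(1, np.where(u == 1, 0.8, 0.2))  # reward depends on U only, not on the arm

# Observational estimate E[R | A = a]: arm 1 looks far better, spuriously.
for a in (0, 1):
    print(f"observational E[R | A={a}] = {reward[logged_arm == a].mean():.3f}")

# Interventional estimate E[R | do(A = a)]: pull each arm regardless of U.
for a in (0, 1):
    u_new = rng.integers(0, 2, size=n)
    r_new = rng.binomial(1, np.where(u_new == 1, 0.8, 0.2))
    print(f"interventional E[R | do(A={a})] = {r_new.mean():.3f}")
```

By construction the arm has no effect here, yet the observational estimates make arm 1 look far better; the interventional estimates both come out near 0.5. Acting on the environment yourself dissolves the confounding.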
Since that bandit work, there have been many more developments:
Oberst and Sontag (2019):
We introduce an off-policy evaluation procedure for highlighting episodes where applying a reinforcement learned (RL) policy is likely to have produced a substantially different outcome than the observed policy. In particular, we introduce a class of structural causal models (SCMs) for generating counterfactual trajectories in finite partially observable Markov Decision Processes (POMDPs).
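The machinery behind those counterfactual trajectories is the Gumbel-max SCM: each categorical transition is modelled as the argmax of log-probabilities plus Gumbel noise, the noise is sampled from its posterior given the observed transition (“top-down” sampling), and the same noise is replayed under the alternative policy’s transition distribution. Below is a minimal sketch of that core step; it is my paraphrase of the construction, not the authors’ code, and the function names are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def truncated_gumbel(logit, upper, rng):
    """Sample logit + Gumbel noise, conditioned on the result being <= upper."""
    g = rng.gumbel() + logit
    return -np.log(np.exp(-g) + np.exp(-upper))

def posterior_gumbels(logits, observed, rng):
    """'Top-down' sample of the Gumbel noise terms, consistent with having
    observed category `observed` as the argmax of logits + noise."""
    top = rng.gumbel() + logsumexp(logits)   # value of the maximum
    noise = np.empty_like(logits, dtype=float)
    for i, logit in enumerate(logits):
        if i == observed:
            noise[i] = top - logit
        else:
            noise[i] = truncated_gumbel(logit, top, rng) - logit
    return noise

def counterfactual_next_state(p_obs, s_next_obs, p_cf, rng):
    """Counterfactual next state: keep the exogenous Gumbel noise implied by the
    observed transition (p_obs, s_next_obs) and replay it under p_cf.
    Assumes strictly positive probabilities."""
    noise = posterior_gumbels(np.log(p_obs), s_next_obs, rng)
    return int(np.argmax(np.log(p_cf) + noise))

rng = np.random.default_rng(0)
p_obs = np.array([0.7, 0.2, 0.1])   # transition distribution under the observed action
p_cf = np.array([0.1, 0.2, 0.7])    # transition distribution under the counterfactual action
print(counterfactual_next_state(p_obs, s_next_obs=0, p_cf=p_cf, rng=rng))
```

The Gumbel-max parameterisation is chosen for the counterfactual stability property the paper argues for: roughly, the counterfactual next state differs from the observed one only if the intervention lowers the probability of the observed state relative to some alternative.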
Schulte and Poupart (2024):
Reinforcement learning (RL) and causal modelling naturally complement each other. The goal of causal modelling is to predict the effects of interventions in an environment, while the goal of reinforcement learning is to select interventions that maximize the rewards the agent receives from the environment. Reinforcement learning includes the two most powerful sources of information for estimating causal relationships: temporal ordering and the ability to act on an environment. This paper examines which reinforcement learning settings we can expect to benefit from causal modelling, and how. In online learning, the agent has the ability to interact directly with their environment, and learn from exploring it. Our main argument is that in online learning, conditional probabilities are causal, and therefore offline RL is the setting where causal learning has the most potential to make a difference. Essentially, the reason is that when an agent learns from their own experience, there are no unobserved confounders that influence both the agent’s own exploratory actions and the rewards they receive. Our paper formalizes this argument. For offline RL, where an agent may and typically does learn from the experience of others, we describe previous and new methods for leveraging a causal model, including support for counterfactual queries.
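One standard way to “leverage a causal model” offline, when the confounder is actually recorded in the log, is backdoor adjustment: average the conditional reward over the marginal distribution of the confounder rather than over its distribution given the action. A toy sketch (again my own illustration, not a method from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Offline log from someone else's policy: arm choice and reward share a
# confounder U, but this time U is recorded in the log.
u = rng.integers(0, 2, size=n)
arm = np.where(rng.random(n) < 0.9, u, 1 - u)
p_reward = 0.3 + 0.2 * arm + 0.4 * u      # true causal effect of the arm is +0.2
reward = rng.binomial(1, p_reward)

# Naive contrast E[R | A=1] - E[R | A=0] is inflated by the confounder.
naive = reward[arm == 1].mean() - reward[arm == 0].mean()

# Backdoor adjustment: E[R | do(A=a)] = sum_u E[R | A=a, U=u] * P(U=u).
p_u = np.bincount(u) / n
adjusted = {
    a: sum(reward[(arm == a) & (u == uu)].mean() * p_u[uu] for uu in (0, 1))
    for a in (0, 1)
}
print(f"naive effect estimate:    {naive:.3f}")                      # roughly 0.5, badly biased
print(f"adjusted effect estimate: {adjusted[1] - adjusted[0]:.3f}")  # close to the true 0.2
```

If U were not logged, no adjustment over the recorded variables would recover these do-values, which is the sense in which offline RL is the harder, genuinely causal setting.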
