Interaction effects and subgroups are probably what we want to estimate
January 25, 2022 — November 6, 2022
I play fast and loose with the language of subgroups and interaction terms here. Each can often be defined in terms of the other, but they are not quite the same thing.
Estimating interaction effects is hard, but it is probably the most important thing to do in any complex and/or human system. So how do we optimally trade off answering the most specific questions against the rapidly growing expense and difficulty of experiments large enough to detect them, and against the rapidly growing number of possible interactions as problems grow?
There is a connection with problematic methodology, where the need for specificity manifests as researcher degrees of freedom, i.e. choosing which interactions to model post hoc.
That is, the world is probably built of hierarchical models but we do not always have the right data to identify them, or enough of it when we do.
Lots of ill-connected notes ATM.
1 Review of limits of heterogeneous treatment effects literature
Data requirements, false discovery. If we want to learn interaction effects from observational studies then we need heroic amounts of data to eliminate confounders and estimate the explosion of possible terms. Does this mean that by attempting to operate this way we are implicitly demanding a surveillance state?
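To put a number on that explosion, a back-of-envelope count of candidate terms (a sketch in Python; nothing here is specific to any particular study design):

```python
from math import comb

# The number of distinct interaction terms among p covariates grows
# combinatorially: comb(p, k) candidate terms of order k.
for p in (10, 50, 100):
    print(f"p={p:>3}: {comb(p, 2):>5} pairwise terms, {comb(p, 3):>8} three-way terms")
```

At p = 100 covariates that is already 4,950 pairwise and 161,700 three-way terms, each needing data to estimate and each a false-discovery opportunity.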
2 Subgroup identification
Classic experimental practice tries to estimate an effect, then either
- encounters a thicket of onerous multiple-testing challenges while doing model selection to work out whom the effect applies to, or
- applies for new funding to identify relevant subgroups with new data in a new experiment.
Can we estimate subgroups and effects simultaneously? How bad is our researcher-degrees-of-freedom situation in this case? This question is complicated (Foster, Taylor, and Ruberg 2011; Imai and Ratkovic 2013; Lipkovich, Dmitrienko, and D’Agostino 2017; Su et al. 2009).
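A rough sketch of one simultaneous strategy, in the spirit of the virtual-twins idea in Foster, Taylor, and Ruberg (2011): estimate per-unit treatment effects with flexible regressions, then fit a shallow tree to those estimates to read off candidate subgroups. Everything below is illustrative (simulated data, arbitrary model choices), and in practice we would want sample splitting to tame exactly the researcher degrees of freedom worried about above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))
t = rng.integers(0, 2, size=n)              # randomised treatment
tau = np.where(X[:, 0] > 0, 1.0, 0.0)       # effect exists only in a subgroup
y = X[:, 1] + tau * t + rng.normal(size=n)

# Step 1: fit an outcome model per arm, predict each unit's "virtual twin".
mu1 = RandomForestRegressor(random_state=0).fit(X[t == 1], y[t == 1])
mu0 = RandomForestRegressor(random_state=0).fit(X[t == 0], y[t == 0])
tau_hat = mu1.predict(X) - mu0.predict(X)   # per-unit effect estimates

# Step 2: a shallow tree over the estimated effects yields interpretable
# candidate subgroups (here it should split near x0 = 0).
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, tau_hat)
print(export_text(tree, feature_names=[f"x{j}" for j in range(p)]))
```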
3 Conditional average treatment effect
Working out how to condition on stuff is the bread and butter of causal inference, and there are a bunch of ways to analyse it. TBC
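In the meantime, a minimal illustration of the object itself (a sketch under a linear outcome model, not any particular paper’s method): with an explicit treatment-by-covariate interaction, the CATE at covariate value x is the treatment coefficient plus the interaction coefficient times x.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "x": rng.normal(size=n),
    "t": rng.integers(0, 2, size=n),
})
# True CATE is 0.5 + 1.0 * x, i.e. the treatment effect varies with x.
df["y"] = 0.5 * df["t"] + 1.0 * df["t"] * df["x"] + rng.normal(size=n)

fit = smf.ols("y ~ t * x", data=df).fit()
print(fit.params)  # 't' ~ 0.5 (effect at x = 0), 't:x' ~ 1.0 (interaction)
```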
4 As transferability
If we know what interacts with our model, then we are closer to learning the correct conditioning set. See external validity.
5 Ontological context
- Science in a High-Dimensional World
- The “It’s really complicated and sad” theory of obesity.
- interactions are probably always present; they just might be small — see Gwern’s Everything Is Correlated for a roundup on this theme.
6 Scientific context
Over at social psychology, I’ve wondered about Peter Dorman’s comment:
the fixation on finding average effects when the structure of effect differences is what we ought to be interested in.
See Slime Mold Time Mold, Reality is Very Weird and You Need to be Prepared for That
But as we see from the history of scurvy, sometimes splitting is the right answer! In fact, there were meaningful differences in different kinds of citrus, and meaningful differences in different animals. Making a splitting argument to save a theory — “maybe our supplier switched to a different kind of citrus, we should check that out” — is a reasonable thing to do, especially if the theory was relatively successful up to that point.
Splitting is perfectly fair game, at least to an extent — doing it a few times is just prudent, though if you have gone down a dozen rabbitholes with no luck, then maybe it is time to start digging elsewhere.
Much commentary from Andrew Gelman et al on this theme. e.g. You need 16 times the sample size to estimate an interaction than to estimate a main effect (Gelman, Hill, and Vehtari 2021 ch 16.4).
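The arithmetic behind that 16, as a sketch (assuming a balanced two-arm trial split into two equal subgroups, equal-variance noise, and an interaction half the size of the main effect):

```python
import numpy as np

sigma, n = 1.0, 1000

# Main effect: difference of two arm means, n/2 units each.
se_main = sigma * np.sqrt(1 / (n / 2) + 1 / (n / 2))   # = 2 * sigma / sqrt(n)

# Interaction: difference-of-differences over four cells of n/4 units each,
# so its variance is the sum of four cell-mean variances.
se_inter = sigma * np.sqrt(4 / (n / 4))                # = 4 * sigma / sqrt(n)

print(se_inter / se_main)  # -> 2.0: double the standard error
# If the interaction is also half the size of the main effect, its
# z-statistic is 4x smaller, so matching power needs 4**2 = 16x the sample.
```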
C&C Epstein-Barr and the Cause of Cause
Miller (2013) writes about basic data hygiene in this light for data journalists etc.
7 Spicy take: actually, how about optimal interactions?
8 Incoming
Kernel tricks for detecting two-way interactions: Agrawal et al. (2019); Agrawal and Broderick (2021). See Tamara Broderick present this.
The Big Data Paradox in Clinical Practice (Msaouel 2022)
The big data paradox is a real-world phenomenon whereby as the number of patients enrolled in a study increases, the probability that the confidence intervals from that study will include the truth decreases. This occurs in both observational and experimental studies, including randomised clinical trials, and should always be considered when clinicians are interpreting research data. Furthermore, as data quantity continues to increase in today’s era of big data, the paradox is becoming more pernicious. Herein, I consider three mechanisms that underlie this paradox, as well as three potential strategies to mitigate it: (1) improving data quality; (2) anticipating and modelling patient heterogeneity; (3) including the systematic error, not just the variance, in the estimation of error intervals.
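A toy illustration of mechanism (3) (my sketch, not Msaouel’s analysis): with even a small systematic bias, naive confidence intervals shrink around the wrong value as n grows, so their coverage of the truth collapses.

```python
import numpy as np

rng = np.random.default_rng(2)
truth, bias, sigma, reps = 0.0, 0.05, 1.0, 2000

for n in (100, 10_000, 1_000_000):
    # Sample means are centred on truth + bias (systematic error),
    # with sampling noise shrinking as 1/sqrt(n).
    means = truth + bias + rng.normal(size=reps) * sigma / np.sqrt(n)
    half = 1.96 * sigma / np.sqrt(n)  # naive 95% CI half-width
    coverage = np.mean(np.abs(means - truth) <= half)
    print(f"n={n:>9,}: naive 95% CI covers the truth {coverage:.0%} of the time")
```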
9 Social context
9.1 Is this what intersectionality means?
A real question. If we are concerned with inequality, then there is an implied graphical model which produces different outcomes depending on who is being modelled, and this has implications for fairness.
It turns out people have engaged meaningfully with this. Bright, Malinsky, and Thompson (2016) suggest some testable models.
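One schematic way such a model becomes testable (my reading, not Bright, Malinsky, and Thompson’s own notation): regress the outcome on two group indicators and their interaction, and test whether the joint disadvantage exceeds the sum of its parts.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({
    "a": rng.integers(0, 2, size=n),  # membership in group A
    "b": rng.integers(0, 2, size=n),  # membership in group B
})
# Each membership carries a penalty, plus an extra penalty at the
# intersection; that interaction term is the intersectional effect.
df["y"] = (-1.0 * df["a"] - 1.0 * df["b"]
           - 0.5 * df["a"] * df["b"] + rng.normal(size=n))

fit = smf.ols("y ~ a * b", data=df).fit()
print(fit.summary().tables[1])  # test H0: the a:b coefficient is zero
```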
9.2 The advice you found is probably not for you
Every pundit has a model of what the typical member of the public thinks, and directs their advice accordingly. For many reasons, the pundit’s model is likely to be wrong: the readers of any given pundit are a self-selecting sample, the pundit’s intuitive model of society is distorted, and even if they surveyed their readership, it would be hard to use that survey to learn anything true about the readers.
So all advice of the form “people should do more X” is suspect, because it rests on the author’s assumption that the readers are in class A when they could easily be in class B, who should perhaps do less X: possibly because X does not work for class B people in general, or because class B people have likely already overdone X and need to lay off it for a while. See adverse advice selection.