Hierarchical models

DAGs, multilevel models, random coefficient models, mixed effect models…



The classical regression setup: your process of interest generates observations conditional on certain predictors, and the observations (but not the predictors) are corrupted by noise.

The hierarchical setup: a directed graph of interacting random processes generates the observations, and you would like to reconstruct the parameters (possibly even conditional distributions over parameters) while accounting for the interactions.

Known as mixed effects models, hierarchical models, nested models (careful! many definitions of that term), random coefficient models, and errors-in-variables models.
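A minimal generative sketch of that setup, with parameter values invented for illustration: a population distribution generates group-level effects, and each group's effect generates its observations. The two naive estimators at the end are the extremes that a hierarchical model interpolates between.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-level process: group effects drawn from a population
# distribution, observations drawn conditional on their group's effect.
mu, tau, sigma = 5.0, 2.0, 1.0     # population mean, between- and within-group sd
n_groups, n_per_group = 8, 50

group_effects = rng.normal(mu, tau, size=n_groups)        # latent layer
y = rng.normal(group_effects[:, None], sigma,
               size=(n_groups, n_per_group))              # observed layer

# The two naive extremes a hierarchical model interpolates between:
no_pooling = y.mean(axis=1)        # one estimate per group, ignores the population
complete_pooling = y.mean()        # one grand estimate, ignores the groups
```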

Directed graphical models provide the formalism for such models. When people say graphical models, the emphasis is frequently on the independence graph itself and on rather general framings; when they say hierarchical models, the assumption seems to be that you wish to estimate parameters, sample from posteriors, or what-have-you.

In certain cute cases (e.g. linear, homoskedastic) these problems become deconvolution problems. (🏗 explain what I mean here and why I bothered to say it.) See ANOVA for an important special case. More generally, we sometimes find it convenient to use hierarchical generalised linear models, which have all manner of nice properties for inference.
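As a concrete sketch of that linear, homoskedastic special case (data and variance components invented for illustration): the classical one-way ANOVA moment estimators recover the between- and within-group variances, which in turn give a partial-pooling estimate that shrinks each raw group mean toward the grand mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented variance components for a balanced one-way layout:
# group means a_j ~ N(mu, tau^2), observations y_ij ~ N(a_j, sigma^2).
mu, tau, sigma = 0.0, 1.0, 2.0
J, n = 20, 10
a = rng.normal(mu, tau, size=J)
y = rng.normal(a[:, None], sigma, size=(J, n))

ybar = y.mean(axis=1)
grand = y.mean()

# Classical one-way ANOVA moment estimators of the variance components.
ms_within = ((y - ybar[:, None]) ** 2).sum() / (J * (n - 1))
ms_between = n * ((ybar - grand) ** 2).sum() / (J - 1)
sigma2_hat = ms_within
tau2_hat = max((ms_between - ms_within) / n, 0.0)

# Partial pooling: shrink each raw group mean toward the grand mean by a
# factor governed by the estimated signal-to-noise ratio.
shrink = tau2_hat / (tau2_hat + sigma2_hat / n)
partial_pooling = grand + shrink * (ybar - grand)
```

When the between-group variance estimate hits zero, the shrinkage factor is zero and we recover complete pooling; as it dominates the noise, the factor approaches one and we recover the no-pooling estimates.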

In the case that you have many layers of hidden variables and don’t expect any of them to correspond to a “real” state so much as simply to approximate the unknown function better, you have just discovered a deep neural network, possibly even a probabilistic neural network. Ranzato (2013), for example, explicitly discusses them in this way.

Thomas Wiecki wrote:

  • The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3

  • Why hierarchical models are awesome, tricky, and Bayesian:

    [… want to take the opportunity to make another point that is not directly related to hierarchical models but can be demonstrated quite well here. Usually when talking about the perils of Bayesian statistics we talk about priors, uncertainty, and flexibility when coding models using Probabilistic Programming. However, an even more important property is rarely mentioned because it is much harder to communicate. @rosstaylor touched on this point in his tweet:

    It’s interesting that many summarize Bayes as being about priors; but real power is its focus on integrals/expectations over maxima/modes

    Michael Betancourt makes a similar point when he says “Expectations are the only thing that make sense.”

    But what’s wrong with maxima/modes? Aren’t those really close to the posterior mean (i.e. the expectation)? Unfortunately, that’s only the case for the simple models we teach to build up intuitions. In complex models, like the hierarchical one, the MAP can be far away and not be interesting or meaningful at all. […]

    This strong divergence of the MAP and the Posterior Mean does not only happen in hierarchical models but also in high dimensional ones, where our intuitions from low-dimensional spaces gets twisted in serious ways. …

    […] Final disclaimer: This might provide the impression that this is a property of being in a Bayesian framework, which is not true. Technically, we can talk about Expectations vs Modes irrespective of that. Bayesian statistics just happens to provide a very intuitive and flexible framework for expressing and estimating these models.
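A minimal numerical sketch of the MAP-versus-mean divergence the quote describes, using a toy model and invented data: with a flat prior on the group-level scale τ, the posterior mode collapses onto the boundary τ = 0 while the posterior mean stays well inside.

```python
import numpy as np

# Toy hierarchy: y_j ~ N(theta_j, s^2), theta_j ~ N(0, tau^2), flat prior
# on the group-level scale tau >= 0.  Marginalizing the theta_j gives
# y_j ~ N(0, tau^2 + s^2), so the posterior over tau is proportional to
# prod_j N(y_j; 0, tau^2 + s^2), which we evaluate on a grid.
s = 1.0
y = np.array([0.3, -0.2, 0.5, -0.4, 0.1])  # invented data, spread smaller than s

tau = np.linspace(0.0, 5.0, 2001)
v = tau**2 + s**2
log_post = -0.5 * (len(y) * np.log(v) + (y**2).sum() / v)
post = np.exp(log_post - log_post.max())

dtau = tau[1] - tau[0]
post /= post.sum() * dtau                 # normalize on the grid

tau_map = tau[np.argmax(post)]            # mode collapses to the boundary, 0
tau_mean = (tau * post).sum() * dtau      # expectation stays well inside
```

Because the data are less dispersed than the noise scale, the likelihood is maximized by switching the group-level variation off entirely, so the MAP says τ = 0; the expectation, integrating over the heavy right tail, reports a comfortably positive τ. The two summaries tell different stories about the same posterior.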

Some of Andrew Gelman’s blog posts on hierarchical models provide helpful context (1, 2, 3).

Teaching

See this nice animated demonstration.

Cluster randomized trials

Melanie Bell, Cluster Randomized Trials

Cluster randomized trials (CRTs) are studies where groups of people, rather than individuals, are randomly allocated to intervention or control. While these types of designs can be appropriate and useful for many research settings, care must be taken to correctly design and analyze them. This talk will give an overview of cluster trials, and various methodological research projects on cluster trials that I’ve undertaken: designing CRTs, the use of GEE with a small number of clusters, handling missing data in CRTs, and analysis using mixed models.

Implementations

Just see probabilistic programming.

References

Blackwell, Matthew, James Honaker, and Gary King. 2015. “A Unified Approach to Measurement Error and Missing Data: Details and Extensions.” Sociological Methods & Research, June, 0049124115589052.
Bolker, Benjamin M., Mollie E. Brooks, Connie J. Clark, Shane W. Geange, John R. Poulsen, M. Henry H. Stevens, and Jada-Simone S. White. 2009. “Generalized Linear Mixed Models: A Practical Guide for Ecology and Evolution.” Trends in Ecology & Evolution 24 (3): 127–35.
Breslow, N. E., and D. G. Clayton. 1993. “Approximate Inference in Generalized Linear Mixed Models.” Journal of the American Statistical Association 88 (421): 9–25.
Bürkner, Paul-Christian. 2018. “Advanced Bayesian Multilevel Modeling with the R Package Brms.” The R Journal 10 (1): 395–411.
Chan, Ngai Hang, Ye Lu, and Chun Yip Yau. 2016. “Factor Modelling for High-Dimensional Time Series: Inference and Model Selection.” Journal of Time Series Analysis, January, n/a–.
DiTraglia, Francis J., Camilo Garcia-Jimeno, Rossa O’Keeffe-O’Donovan, and Alejandro Sanchez-Becerra. 2020. “Identifying Causal Effects in Experiments with Social Interactions and Non-Compliance.” arXiv:2011.07051 [Econ, Stat], November.
Efron, Bradley. 2009. “Empirical Bayes Estimates for Large-Scale Prediction Problems.” Journal of the American Statistical Association 104 (487): 1015–28.
Gelman, Andrew. 2006. “Multilevel (Hierarchical) Modeling: What It Can and Cannot Do.” Technometrics 48 (3): 432–35.
Gelman, Andrew, Jennifer Hill, and Aki Vehtari. 2021. Regression and Other Stories. Cambridge, UK: Cambridge University Press.
Gelman, Andrew, Daniel Lee, and Jiqiang Guo. 2015. “Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization.” Journal of Educational and Behavioral Statistics 40 (5): 530–43.
Hansen, Christian B. 2007. “Generalized Least Squares Inference in Panel and Multilevel Models with Serial Correlation and Fixed Effects.” Journal of Econometrics 140 (2): 670–94.
Koren, Yehuda, Robert Bell, and Chris Volinsky. 2009. “Matrix Factorization Techniques for Recommender Systems.” Computer 42 (8): 30–37.
Lee, Youngjo, and John A. Nelder. 2001. “Hierarchical Generalised Linear Models: A Synthesis of Generalised Linear Models, Random-Effect Models and Structured Dispersions.” Biometrika 88 (4): 987–1006.
———. 2006. “Double Hierarchical Generalized Linear Models (with Discussion).” Journal of the Royal Statistical Society: Series C (Applied Statistics) 55 (2): 139–85.
Li, Yingying, and Per A. Mykland. 2007. “Are Volatility Estimators Robust with Respect to Modeling Assumptions?” Bernoulli 13 (3): 601–22.
Mallet, A. 1986. “A Maximum Likelihood Estimation Method for Random Coefficient Regression Models.” Biometrika 73 (3): 645–56.
McElreath, Richard, and Robert Boyd. 2007. Mathematical Models of Social Evolution: A Guide for the Perplexed. University of Chicago Press.
Miller, Jane E. 2013. The Chicago Guide to Writing about Multivariate Analysis. Second edition. Chicago Guides to Writing, Editing, and Publishing. Chicago: University of Chicago Press.
Ranzato, M. 2013. “Modeling Natural Images Using Gated MRFs.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (9): 2206–22.
Reiersol, Olav. 1950. “Identifiability of a Linear Relation Between Variables Which Are Subject to Error.” Econometrica 18 (4): 375.
Saefken, Benjamin, Thomas Kneib, Clara-Sophie van Waveren, and Sonja Greven. 2014. “A Unifying Approach to the Estimation of the Conditional Akaike Information in Generalized Linear Mixed Models.” Electronic Journal of Statistics 8 (1): 201–25.
Valpine, Perry de. 2011. “Frequentist Analysis of Hierarchical Models for Population Dynamics and Demographic Data.” Journal of Ornithology 152 (2): 393–408.
Venables, W. N., and C. M. Dichmont. 2004. “GLMs, GAMs and GLMMs: An Overview of Theory for Applications in Fisheries Research.” Fisheries Research, Models in Fisheries Research: GLMs, GAMS and GLMMs, 70 (2–3): 319–37.
