Hierarchical models

DAGs, multilevel models, random coefficient models, mixed effect models…


The classical regression setup: your process of interest generates observations conditional on certain predictors. The observations (but not the predictors) are corrupted by noise.

Hierarchical setup: there is a directed graph of interacting random processes generating the observations, and you would like to reconstruct their parameters, possibly even conditional distributions over those parameters.
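For concreteness, a schematic contrast (the notation here is mine and only one of several conventions): classical regression models each observation as

$$
y_i = f(x_i; \theta) + \varepsilon_i
$$

with a single fixed parameter vector $\theta$, whereas a (two-level) hierarchical setup lets the per-group parameters themselves be draws from a higher-level law,

$$
y_{ij} = f(x_{ij}; \theta_j) + \varepsilon_{ij},
\qquad \theta_j \sim G(\cdot \mid \phi),
$$

and we would like to infer not only the $\theta_j$ but also the hyperparameters $\phi$.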

Such models are known as mixed effects models, hierarchical models, nested models (careful! that term has many definitions), random coefficient models, or error-in-variables models.

Directed graphical models provide the formalism for such models. When you say graphical models, the emphasis is frequently on the independence graph itself and on rather general framings; when you say hierarchical models, it seems to be assumed that you wish to estimate parameters, sample from posteriors, or what-have-you.

In certain cute cases (e.g. linear, homoskedastic) these problems become deconvolution problems. (🏗 explain what I mean here and why I bothered to say it.) See ANOVA for an important special case of such a model. More generally, we sometimes find it convenient to use hierarchical generalised linear models, which have all manner of nice properties for inference, especially for frequentists.
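To gesture at the deconvolution remark with a minimal sketch (my notation, for the simplest random-intercept case): suppose

$$
y_{ij} = a_j + \varepsilon_{ij},
\qquad a_j \sim F,
\quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2).
$$

Marginally, each $y_{ij}$ then has density $f_F * \phi_\sigma$, the convolution of the latent-effect density $f_F$ (assuming $F$ has one) with the Gaussian noise density $\phi_\sigma$, so estimating the latent law $F$ from the observations is a deconvolution problem.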

In the case that you have many layers of hidden variables and don’t expect any of them to correspond to a “real” state so much as simply to approximate the unknown function better, you have just rediscovered a deep neural network, possibly even a probabilistic neural network. [@RanzatoModeling2013], for example, makes that connection explicit.

Thomas Wiecki wrote:

  • The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3 (see the code sketch after this list)

  • Why hierarchical models are awesome, tricky, and Bayesian:

    [… I want to take the opportunity to make another point that is not directly related to hierarchical models but can be demonstrated quite well here. Usually when talking about the perils of Bayesian statistics we talk about priors, uncertainty, and flexibility when coding models using Probabilistic Programming. However, an even more important property is rarely mentioned because it is much harder to communicate. [@rosstaylor touched on this point in his tweet](https://twitter.com/rosstaylor90/status/827263854002401281?ref_src=twsrc%5Etfw)

    It’s interesting that many summarize Bayes as being about priors; but real power is its focus on integrals/expectations over maxima/modes

    Michael Betancourt makes a similar point when he says “Expectations are the only thing that make sense.”

    But what’s wrong with maxima/modes? Aren’t those really close to the posterior mean (i.e. the expectation)? Unfortunately, that’s only the case for the simple models we teach to build up intuitions. In complex models, like the hierarchical one, the MAP can be far away and not be interesting or meaningful at all. […]

    This strong divergence of the MAP and the Posterior Mean does not only happen in hierarchical models but also in high dimensional ones, where our intuitions from low-dimensional spaces get twisted in serious ways. …

    […] Final disclaimer: This might provide the impression that this is a property of being in a Bayesian framework, which is not true. Technically, we can talk about Expectations vs Modes irrespective of that. Bayesian statistics just happens to provide a very intuitive and flexible framework for expressing and estimating these models.
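Below is a minimal sketch in the spirit of Wiecki’s hierarchical linear regression post: synthetic data, per-group intercepts and slopes partially pooled towards population-level means. The priors and dimensions are arbitrary, and I am assuming the PyMC3-era API, so treat the exact argument names as illustrative rather than authoritative.

```python
import numpy as np
import pymc3 as pm

# Synthetic grouped data: each group gets its own intercept and slope,
# themselves drawn from population-level distributions.
rng = np.random.default_rng(0)
n_groups, n_per_group = 8, 20
group = np.repeat(np.arange(n_groups), n_per_group)
x = rng.normal(size=n_groups * n_per_group)
true_a = rng.normal(1.0, 0.5, size=n_groups)
true_b = rng.normal(-2.0, 0.3, size=n_groups)
y = true_a[group] + true_b[group] * x + rng.normal(0.0, 0.5, size=x.size)

with pm.Model() as hierarchical_model:
    # Hyperpriors: population-level location and spread of the coefficients.
    mu_a = pm.Normal("mu_a", mu=0.0, sd=10.0)
    sigma_a = pm.HalfNormal("sigma_a", sd=5.0)
    mu_b = pm.Normal("mu_b", mu=0.0, sd=10.0)
    sigma_b = pm.HalfNormal("sigma_b", sd=5.0)

    # Group-level coefficients, partially pooled towards the population values.
    a = pm.Normal("a", mu=mu_a, sd=sigma_a, shape=n_groups)
    b = pm.Normal("b", mu=mu_b, sd=sigma_b, shape=n_groups)

    # Observation noise and likelihood.
    sigma_y = pm.HalfNormal("sigma_y", sd=5.0)
    pm.Normal("y_obs", mu=a[group] + b[group] * x, sd=sigma_y, observed=y)

    trace = pm.sample(1000, tune=1000, target_accept=0.9)
```

Note that this centred parameterization is exactly the kind of model whose posterior can develop a funnel-shaped geometry, which is why the mode can end up far from the mean as in the quote above; a non-centred reparameterization is the usual remedy.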

Some of Andrew Gelman’s blog posts on hierarchical models are probably worth inspecting. 1, 2, 3

Implementations

Just see probabilistic programming.