Neural likelihood inference
Emulating likelihoods with neural networks
2024-04-08 — 2024-04-08
Incorporating various neural approximations to (functions of) the likelihood of an otherwise-intractable model.
Things I have encountered in the wild:
- Neural Posterior Estimation (amortized NPE and sequential SNPE) (Deistler, Goncalves, and Macke 2022; Glöckler, Deistler, and Macke 2022; Greenberg, Nonnenmacher, and Macke 2019; Papamakarios and Murray 2016)
- Neural Likelihood Estimation ((S)NLE) (Boelts et al. 2022; Lueckmann et al. 2017; Papamakarios, Sterratt, and Murray 2019)
- Neural Ratio Estimation ((S)NRE) (Delaunoy et al. 2022; Durkan, Murray, and Papamakarios 2020; Hermans, Begy, and Louppe 2020; Miller, Weniger, and Forré 2022) (see also density ratio)
- Neural Bayes predictive
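The ratio trick behind NRE is easy to illustrate from scratch: train a classifier to distinguish dependent pairs \((\theta, x)\) drawn from the joint from shuffled pairs drawn from the product of marginals; the classifier's logit then estimates \(\log p(x\mid\theta) - \log p(x)\), i.e. the likelihood ratio needed for posterior sampling. The following toy sketch (my own illustration, not any of the cited implementations) uses a linear-Gaussian model and plain logistic regression in NumPy, with hand-chosen quadratic features standing in for a neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: theta ~ N(0, 1) prior, x | theta ~ N(theta, 1).
def simulate(n):
    theta = rng.normal(size=n)
    x = theta + rng.normal(size=n)
    return theta, x

def features(theta, x):
    # Quadratic features suffice for this Gaussian toy; a neural
    # network would learn such features in a real NRE implementation.
    return np.column_stack([np.ones_like(theta), theta, x, theta * x,
                            theta**2, x**2])

# Label 1 for dependent (joint) pairs, 0 for shuffled (marginal) pairs.
n = 5000
theta, x = simulate(n)
theta_shuffled = rng.permutation(theta)
F = np.vstack([features(theta, x), features(theta_shuffled, x)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression by gradient descent; the learned logit
# d(theta, x) approximates log p(x|theta) - log p(x).
w = np.zeros(F.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.05 * F.T @ (p - y) / len(y)

def log_ratio(theta, x):
    """Estimated log likelihood-ratio log p(x|theta) - log p(x)."""
    return features(np.atleast_1d(theta), np.atleast_1d(x)) @ w

# A matched pair should score higher than a mismatched one.
print(log_ratio(1.0, 1.1), log_ratio(1.0, -3.0))
```

In the sequential (SNRE) variants this estimated ratio is then plugged into an MCMC sampler, since \(p(\theta\mid x) \propto r(x\mid\theta)\,p(\theta)\).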
Connects closely to neural processes, which target the posterior predictive, and to simulation-based inference, which targets the case where we have a good but uncalibrated simulator.
A summary of some methods is in Cranmer, Brehmer, and Louppe (2020).
1 Implementations
1.1 sbi
See the Mackelab sbi page for several implementations:
Goal: Algorithmically identify mechanistic models which are consistent with data.
Each of the methods above needs three inputs: a candidate mechanistic model, prior knowledge or constraints on model parameters, and observational data (or summary statistics thereof).
The methods then proceed by
- sampling parameters from the prior and simulating synthetic data from those parameters,
- learning the (probabilistic) association between data (or data features) and the underlying parameters, i.e. learning statistical inference from simulated data. How this association is learned differs between the methods above, but all use deep neural networks.
- applying the learned network to empirical data to derive the full space of parameters consistent with the data and the prior, i.e. the posterior distribution. High posterior probability is assigned to parameters consistent with both the data and the prior, low probability to inconsistent parameters. SNPE learns the posterior distribution directly, whereas SNLE and SNRE need an extra MCMC sampling step to construct a posterior.
- if needed, using an initial estimate of the posterior to adaptively generate additional, more informative simulations.
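The first two steps (simulate from the prior, then fit a conditional density of parameters given data) can be sketched without any library at all. The toy below is my own from-scratch illustration of the NPE idea, not the sbi API: the "conditional density estimator" is just a linear-Gaussian family fit by maximum likelihood, which happens to be exact for this conjugate model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: sample from the prior and push through the simulator.
# Toy model: theta ~ N(0, 1), x | theta ~ N(theta, 1).
n = 20_000
theta = rng.normal(size=n)
x = theta + rng.normal(size=n)

# Step 2: fit a conditional density q(theta | x) to the simulated pairs.
# Here the "network" is a linear-Gaussian family, fit by maximum
# likelihood: least squares for the mean, residual variance for scale.
A = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
resid_var = np.mean((theta - A @ coef) ** 2)

def posterior(x_obs):
    """Amortized posterior: mean and variance of q(theta | x_obs)."""
    return coef[0] + coef[1] * x_obs, resid_var

# The analytic posterior for this conjugate model is N(x/2, 1/2),
# so at x_obs = 1.6 we expect roughly mean 0.8, variance 0.5.
mean, var = posterior(1.6)
print(mean, var)
```

Because the estimator is fit once over the whole prior-predictive distribution, the same `posterior` function is reusable for any observation (this is the "amortized" part); the sequential variants instead re-simulate around an initial posterior estimate, as in the last step above.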
Code here: mackelab/sbi: Simulation-based inference in PyTorch
Compare to contrastive learning.