# Neural likelihood inference

Emulating likelihoods with neural networks

April 8, 2024

Notes on various neural approximations to (functions of) the likelihood of an otherwise-intractable model.

The main families are:

- Neural Posterior Estimation (amortized NPE and sequential SNPE) (Deistler, Goncalves, and Macke 2022; Glöckler, Deistler, and Macke 2022; Greenberg, Nonnenmacher, and Macke 2019; Papamakarios and Murray 2016),
- Neural Likelihood Estimation ((S)NLE) (Boelts et al. 2022; Lueckmann et al. 2017; Papamakarios, Sterratt, and Murray 2019), and
- Neural Ratio Estimation ((S)NRE) (Delaunoy et al. 2022; Durkan, Murray, and Papamakarios 2020; Hermans, Begy, and Louppe 2020; Miller, Weniger, and Forré 2022) (see also density ratio).
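A minimal sketch of the amortized-NPE idea on a toy where everything is exact. All names here are illustrative: for θ ~ N(0, 1) and x | θ ~ N(θ, σ²), the exact posterior mean is x/(1+σ²), and because the model is linear-Gaussian, least-squares regression of simulated θ on simulated x recovers exactly this amortized posterior-mean map. A real NPE replaces the regression with a conditional normalizing flow trained on the same simulated pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.5  # observation noise variance (toy choice)

# Step 1: simulate (theta, x) pairs from the prior and the simulator.
n = 50_000
theta = rng.normal(0.0, 1.0, n)                  # prior: theta ~ N(0, 1)
x = theta + rng.normal(0.0, np.sqrt(sigma2), n)  # simulator: x | theta ~ N(theta, sigma2)

# Step 2: "train" an amortized posterior. Linear-Gaussian means least
# squares on simulated pairs is the whole fit; NPE proper would train a
# conditional density estimator (e.g. a normalizing flow) instead.
a = (theta @ x) / (x @ x)  # closed-form least-squares slope

# Step 3: apply the amortized map to an "observed" x.
x_obs = 1.3
print(f"amortized estimate: {a * x_obs:.3f}, exact: {x_obs / (1 + sigma2):.3f}")
```

The point of amortization is visible in step 3: after the one-off training cost, the posterior estimate for any new observation is a single cheap forward evaluation.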

Neural point estimators:

> NeuralEstimators facilitates the user-friendly development of neural point estimators, which are neural networks that transform data into parameter point estimates. They are likelihood free, substantially faster than classical methods, and can be designed to be approximate Bayes estimators. The package caters for any model for which simulation is feasible.

Permutation-invariant neural estimators (Sainsbury-Dale, Zammit-Mangion, and Huser 2022, 2024) lean on deep sets.
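A sketch of why deep sets give permutation invariance: the estimator has the form θ̂ = φ(Σᵢ ψ(xᵢ)), so reordering the observations cannot change the output. The weights below are random (untrained) because the point is the architectural property, not estimation quality; a real neural point estimator would train ψ and φ on simulated (θ, data) pairs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Deep-sets point estimator: theta_hat = phi(sum_i psi(x_i)).
# Random, untrained weights -- this demonstrates the architecture's
# permutation invariance, not its accuracy.
W_psi = rng.normal(size=(8, 1))  # inner network psi: R -> R^8
W_phi = rng.normal(size=(1, 8))  # outer network phi: R^8 -> R (one parameter)

def estimate(x):
    """Map a set of scalar observations to a scalar point estimate."""
    h = np.tanh(W_psi @ x[None, :])  # psi applied elementwise: shape (8, n)
    pooled = h.sum(axis=1)           # permutation-invariant sum pooling: (8,)
    return float(W_phi @ np.tanh(pooled))

x = rng.normal(size=20)
print(estimate(x), estimate(rng.permutation(x)))  # equal up to float rounding
```

Sum pooling also means the same trained network accepts datasets of any size, which is part of what makes these estimators convenient in practice.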

This connects closely to neural processes, which target the posterior predictive, and to simulation-based inference, which targets the case where we have a good but uncalibrated simulator.

A summary of some methods is in Cranmer, Brehmer, and Louppe (2020).

## 1 Implementations

### 1.1 sbi

See the Mackelab sbi page for several implementations:

> Goal: Algorithmically identify mechanistic models which are consistent with data. Each of the methods above needs three inputs: a candidate mechanistic model, prior knowledge or constraints on model parameters, and observational data (or summary statistics thereof).
>
> The methods then proceed by
>
> - sampling parameters from the prior followed by simulating synthetic data from these parameters,
> - learning the (probabilistic) association between data (or data features) and underlying parameters, i.e. learning statistical inference from simulated data. The way in which this association is learned differs between the above methods, but all use deep neural networks.
> - applying the learned neural network to empirical data to derive the full space of parameters consistent with the data and the prior, i.e. the posterior distribution. High posterior probability is assigned to parameters which are consistent with both the data and the prior, low probability to inconsistent parameters. While SNPE directly learns the posterior distribution, SNLE and SNRE need an extra MCMC sampling step to construct a posterior.
> - if needed, using an initial estimate of the posterior to adaptively generate additional informative simulations.
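The loop above can be sketched end-to-end for the ratio-estimation (NRE) variant on a linear-Gaussian toy with a known posterior. Everything below — the model, the features, the hand-rolled training loop — is illustrative and deliberately dependency-free; it is not the sbi API. The trick being shown: a classifier trained to distinguish joint pairs (θᵢ, xᵢ) from shuffled pairs (θᵢ, x_{π(i)}) has a logit that approximates log p(x|θ) − log p(x), which combined with the prior yields the posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Step 1: sample parameters from the prior, then simulate data.
n = 10_000
theta = rng.normal(size=n)      # prior: theta ~ N(0, 1)
x = theta + rng.normal(size=n)  # simulator: x | theta ~ N(theta, 1)

# Step 2: learn the theta-x association as a classifier (the NRE trick):
# label 1 = joint pairs, label 0 = shuffled pairs, so the optimal logit
# is log p(x|theta) - log p(x). Quadratic features suffice for this
# Gaussian toy; a real NRE uses a neural network here.
def features(t, z):
    return np.stack([np.ones_like(t), t, z, t * z, t**2, z**2], axis=1)

X = np.vstack([features(theta, x), features(theta, rng.permutation(x))])
y = np.concatenate([np.ones(n), np.zeros(n)])

w = np.zeros(6)
for _ in range(5000):  # full-batch gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.2 * X.T @ (p - y) / len(y)

# Step 3: combine the learned log-ratio with the prior to get a posterior.
# (A grid stands in for the MCMC step SNLE/SNRE normally need.)
x_obs = 1.5
grid = np.linspace(-4, 4, 801)
log_post = -0.5 * grid**2 + features(grid, np.full_like(grid, x_obs)) @ w
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean:", grid @ post, "exact:", x_obs / 2)  # exact posterior is N(x_obs/2, 1/2)
```

The fourth (sequential) step of the loop would re-run steps 1–3 with the proposal drawn from this estimated posterior instead of the prior, concentrating further simulations where they are informative.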

Code here: mackelab/sbi: Simulation-based inference in PyTorch

Compare to contrastive learning.

### 1.2 NeuralEstimators

### 1.3 rsnl

## 2 References

*eLife*.

*Proceedings of the National Academy of Sciences*.

*Proceedings of the 37th International Conference on Machine Learning*. ICML’20.

*Proceedings of the 36th International Conference on Machine Learning*.

*arXiv:1903.04057 [Cs, Stat]*.

*Symposium on Advances in Approximate Bayesian Inference*.

*AISTATS*.

*Proceedings of the 31st International Conference on Neural Information Processing Systems*. NIPS’17.

*Advances in Neural Information Processing Systems 29*.

*Journal of Machine Learning Research*.

*Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics*.

*The American Statistician*.

*arXiv:1808.00973 [Hep-Ph, Physics:physics, Stat]*.