Learning summary statistics



A dimensionality-reduction/feature-engineering trick specific to the needs of likelihood-free inference methods such as indirect inference or approximate Bayesian computation. In this context it is not just the summary statistic in isolation that must be considered, but also its relationship to a distance measure comparing the summary of the observations with the summary of the model simulations; we would like the two to be tractable in combination. A limiting case of learnable coarse graining?
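
To fix ideas, here is a minimal sketch of how a summary statistic \(s(\cdot)\) and a distance \(\rho\) combine in plain rejection ABC. The toy Gaussian simulator, the hand-picked (mean, std) summary, the Euclidean distance and the tolerance are all illustrative assumptions of mine, not anything from the papers cited below.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    """Toy simulator: n draws from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    """Hand-chosen low-dimensional summary s(x): sample mean and std."""
    return np.array([x.mean(), x.std()])

def distance(s_sim, s_obs):
    """Distance rho between summaries; here plain Euclidean."""
    return np.linalg.norm(s_sim - s_obs)

def rejection_abc(x_obs, prior_draws=10_000, eps=0.1):
    """Keep prior draws whose simulated summary lands within eps of the observed summary."""
    s_obs = summary(x_obs)
    accepted = []
    for _ in range(prior_draws):
        theta = rng.uniform(-5.0, 5.0)       # sample from a uniform prior
        s_sim = summary(simulate(theta))     # summarise the forward simulation
        if distance(s_sim, s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

x_obs = simulate(theta=1.5)
posterior_samples = rejection_abc(x_obs)
print(posterior_samples.mean(), posterior_samples.std())
```

Everything downstream (the acceptance rate, the information retained about \(\theta\)) hinges on the interplay of `summary` and `distance`, which is why learning them jointly is attractive.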

TBD. See de Castro and Dorigo (2019):

Simulator-based inference is currently at the core of many scientific fields, such as population genetics, epidemiology, and experimental particle physics. In many cases the implicit generative procedure defined in the simulation is stochastic and/or lacks a tractable probability density \(p(x|\theta)\), where \(\theta \in \Theta\) is the vector of model parameters. Given some experimental observations \(D = \{x_0, \dots, x_n\},\) a problem of special relevance for these disciplines is statistical inference on a subset of model parameters \(\omega \in \Omega \subseteq \Theta.\) This can be approached via likelihood-free inference algorithms such as Approximate Bayesian Computation (ABC), simplified synthetic likelihoods or density estimation-by-comparison approaches. Because the relation between the parameters of the model and the data is only available via forward simulation, most likelihood-free inference algorithms tend to be computationally expensive due to the need of repeated simulations to cover the parameter space. When data are high-dimensional, likelihood-free inference can rapidly become inefficient, so low-dimensional summary statistics \(s(D)\) are used instead of the raw data for tractability. The choice of summary statistics for such cases becomes critical, given that naive choices might cause loss of relevant information and a corresponding degradation of the power of resulting statistical inference. As a motivating example we consider data analyses at the Large Hadron Collider (LHC), such as those carried out to establish the discovery of the Higgs boson. In that framework, the ultimate aim is to extract information about Nature from the large amounts of high-dimensional data on the subatomic particles produced by energetic collision of protons, and acquired by highly complex detectors built around the collision point. Accurate data modelling is only available via stochastic simulation of a complicated chain of physical processes, from the underlying fundamental interaction to the subsequent particle interactions with the detector elements and their readout. As a result, the density \(p(x|\theta)\) cannot be analytically computed.
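
One classical recipe for actually *learning* \(s(D)\) is the semi-automatic approach of Fearnhead and Prangle (2012): run a pilot batch of simulations, regress the generating parameters on features of the simulated data, and use the fitted regression's predictions as the summary statistic. A hedged numpy sketch under the same toy Gaussian model as above; the candidate feature map and the linear regression are illustrative choices, not theirs.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=100):
    return rng.normal(theta, 1.0, size=n)

def features(x):
    """Candidate raw features of a dataset; the regression picks a useful combination."""
    return np.array([x.mean(), x.std(), np.median(x), (x ** 2).mean()])

# Pilot run: simulate (theta, x) pairs from the prior and the simulator.
thetas = rng.uniform(-5.0, 5.0, size=5000)
X = np.stack([features(simulate(t)) for t in thetas])

# Linear regression approximating E[theta | features(x)];
# its fitted prediction is the learned summary s(x).
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, thetas, rcond=None)

def learned_summary(x):
    f = np.concatenate([[1.0], features(x)])
    return f @ beta

print(learned_summary(simulate(1.5)))  # should land near 1.5
```

Replacing the linear regression with a neural network gives regression-learned summaries in the spirit of Wong et al. (2018) or Åkesson et al. (2020).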

There is a very different approach in Edwards and Storkey (2017).

An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes. We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision.

I wonder whether this neural statistician helps with the aforementioned goal of simulation-based inference.

Incoming

Can we build neural summary statistics via deep sets?
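
In spirit, yes: both the neural statistician's pooling step and the deep-sets construction give a permutation-invariant map from a dataset to a fixed-length vector, which is exactly the shape of a summary statistic \(s(D)\). A hedged PyTorch sketch of such a set encoder; the layer sizes and mean pooling are illustrative, and the training objective (posterior regression, an ELBO as in the neural statistician, a contrastive loss, …) is left open.

```python
import torch
from torch import nn

class SetSummary(nn.Module):
    """Permutation-invariant dataset encoder: s(D) = rho(mean_i phi(x_i))."""

    def __init__(self, x_dim=1, hidden=64, summary_dim=8):
        super().__init__()
        self.phi = nn.Sequential(            # per-datapoint embedding
            nn.Linear(x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rho = nn.Sequential(            # map the pooled embedding to the summary
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, summary_dim),
        )

    def forward(self, x):
        # x: (batch, n_points, x_dim); pooling over points gives permutation invariance
        return self.rho(self.phi(x).mean(dim=1))

# A batch of 32 simulated datasets, each with 100 scalar observations.
encoder = SetSummary()
datasets = torch.randn(32, 100, 1)
summaries = encoder(datasets)
print(summaries.shape)  # torch.Size([32, 8])
```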

References

Aeschbacher, Simon, Mark A Beaumont, and Andreas Futschik. 2012. “A Novel Approach for Choosing Summary Statistics in Approximate Bayesian Computation.” Genetics 192 (3): 1027–47.
Åkesson, Mattias, Prashant Singh, Fredrik Wrede, and Andreas Hellander. 2020. “Convolutional Neural Networks as Summary Statistics for Approximate Bayesian Computation,” January.
Bertl, Johanna, Gregory Ewing, Carolin Kosiol, and Andreas Futschik. 2015. “Approximate Maximum Likelihood Estimation.” arXiv:1507.04553 [Stat], July.
Castro, Pablo de, and Tommaso Dorigo. 2019. “INFERNO: Inference-Aware Neural Optimisation.” Computer Physics Communications 244 (November): 170–79.
Chen, Boyuan, Kuang Huang, Sunand Raghupathi, Ishaan Chandratreya, Qiang Du, and Hod Lipson. 2022. “Automated Discovery of Fundamental Variables Hidden in Experimental Data.” Nature Computational Science 2 (7): 433–42.
Drovandi, Christopher C., Anthony N. Pettitt, and Malcolm J. Faddy. 2011. “Approximate Bayesian Computation Using Indirect Inference.” Journal of the Royal Statistical Society: Series C (Applied Statistics) 60 (3): 317–37.
Drovandi, Christopher C., Anthony N. Pettitt, and Roy A. McCutchan. 2016. “Exact and Approximate Bayesian Inference for Low Integer-Valued Time Series Models with Intractable Likelihoods.” Bayesian Analysis 11 (2): 325–52.
Edwards, Harrison, and Amos Storkey. 2017. “Towards a Neural Statistician.” In Proceedings of ICLR.
Fearnhead, Paul, and Dennis Prangle. 2012. “Constructing Summary Statistics for Approximate Bayesian Computation: Semi-Automatic Approximate Bayesian Computation.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 74 (3): 419–74.
Hahn, P. Richard, and Carlos M. Carvalho. 2015. “Decoupling Shrinkage and Selection in Bayesian Linear Models: A Posterior Summary Perspective.” Journal of the American Statistical Association 110 (509): 435–48.
Hoffmann, Till, and Jukka-Pekka Onnela. 2022. “Minimizing the Expected Posterior Entropy Yields Optimal Summary Statistics.” arXiv.
Kulkarni, Tejas D., Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. 2015. “Deep Convolutional Inverse Graphics Network.” arXiv:1503.03167 [Cs], March.
Nunes, Matthew A., and David J. Balding. 2010. “On Optimal Selection of Summary Statistics for Approximate Bayesian Computation.” Statistical Applications in Genetics and Molecular Biology 9 (1).
Pacchiardi, Lorenzo, Pierre Künzli, Marcel Schöngens, Bastien Chopard, and Ritabrata Dutta. 2021. “Distance-Learning For Approximate Bayesian Computation To Model a Volcanic Eruption.” Sankhya B 83 (1): 288–317.
Prangle, Dennis. 2015. “Summary Statistics in Approximate Bayesian Computation,” December.
Sisson, Scott A., Yanan Fan, and Mark Beaumont. 2018. Handbook of Approximate Bayesian Computation. CRC Press.
Stein, Michael L., Zhiyi Chi, and Leah J. Welty. 2004. “Approximating Likelihoods for Large Spatial Data Sets.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66 (2): 275–96.
Wong, Wing, Bai Jiang, Tung-yu Wu, and Charles Zheng. 2018. “Learning Summary Statistic for Approximate Bayesian Computation via Deep Neural Network.” Statistica Sinica 27 (4).
