Bayesian inverse problems



Inverse problems that happen to take a probabilistic approach.

This is easy to explain in Bayesian terms, so I’ll start from there. Say we have a model which gives us the density of a certain output observation \(y\) for a given input \(x\), which we write as \(p(y\mid x)\). By Bayes’ rule we can find the density of inputs for a given observed output: \[p(x \mid y)=\frac{p(x) p(y \mid x)}{p(y)}.\] Computing \(p(x \mid y)\) is the most basic kind of Bayesian inference, nothing special to see here. If the problem is high dimensional, in the sense that \(x\in \mathbb{R}^n\) for \(n\) large, and ill-posed, in the sense that, e.g., \(y\in\mathbb{R}^m\) with \(n>m\), we have a particular set of challenges which it is useful to group under the heading of inverse problems.1 A classic example of this class of problem is “What was the true image that was blurred to create this corrupted version?” Inverse problems arise naturally in tomography, compressed sensing, deconvolution, inverting PDEs and many other areas.
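
To make that concrete, here is a minimal sketch of the linear-Gaussian case, the one setting where everything is available in closed form. The toy blur matrix `A` and the noise and prior scales are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: y = A x + noise, with a 1-D "blur" matrix that is
# ill-posed because it maps n = 50 unknowns to only m = 30 observations.
n, m = 50, 30
centres = np.arange(m)[:, None] * n / m
A = np.exp(-0.5 * (centres - np.arange(n)[None, :]) ** 2 / 4.0)
A /= A.sum(axis=1, keepdims=True)

sigma = 0.05  # noise scale:  p(y | x) = N(Ax, sigma^2 I)
tau = 1.0     # prior scale:  p(x)     = N(0,  tau^2 I)

x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + sigma * rng.normal(size=m)

# Conjugacy gives the posterior in closed form:
#   cov  = (A^T A / sigma^2 + I / tau^2)^{-1},   mean = cov A^T y / sigma^2.
# The prior term is what makes the inversion well-posed: A^T A alone is singular.
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.eye(n) / tau**2)
post_mean = post_cov @ A.T @ y / sigma**2

print("reconstruction RMSE:", np.sqrt(np.mean((post_mean - x_true) ** 2)))
```

Even in this friendly setting the ill-posedness shows up as a rank-deficient \(A^\top A\), which the prior term regularises.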

In the world I live in, \(p(y \mid x)\) is not completely specified, but is a regression density with unknown parameters \(\theta\) that we must also learn, and that may have prior densities of their own. Maybe I also wish to parameterise the prior density on \(x\), \(p(x \mid \lambda),\) which is typically independent of \(\theta.\) Now the model is a hierarchical Bayes model, leading to a directed graphical model factorisation \[p(x,y,\theta,\lambda)=p(\theta)p(\lambda)p(x\mid \lambda) p(y\mid x,\theta).\] We can apply Bayes’ rule again to write the density of interest as \[p(x, \theta, \lambda \mid y) \propto p(y \mid x, \theta)p(x \mid\lambda)p(\lambda)p(\theta).\] Solving this is also, I believe, sometimes called joint inversion. For my applications, we usually want to do this in two phases. In the first, we have a data set of \(N\) input-output pairs indexed by \(i,\) \(\mathcal{D}=\{(x_i, y_i):i=1,\dots,N\},\) which we use to estimate the posterior density \(p(\theta,\lambda \mid \mathcal{D})\) in a learning phase. Thereafter we only ever wish to find \(p(x, \theta, \lambda \mid y,\mathcal{D}),\) or possibly even \(p(x \mid y,\mathcal{D}),\) but either way we do not thereafter update \(\theta, \lambda \mid \mathcal{D}\).
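
As a concrete rendering of that factorisation, here is a minimal sketch of the log joint density. Every distributional choice in it (log-normal hyperpriors, Gaussians everywhere else, a linear map `A`) is a stand-in for illustration, not a claim about any particular model:

```python
import numpy as np
from scipy import stats

def log_joint(x, y, theta, lam, A):
    """log p(x, y, theta, lambda) under illustrative stand-in densities:
    log-normal priors on theta and lambda, p(x | lambda) = N(0, lambda^2 I),
    and a linear-Gaussian regression p(y | x, theta) = N(Ax, theta^2 I)."""
    lp = stats.lognorm.logpdf(theta, s=1.0)                   # p(theta)
    lp += stats.lognorm.logpdf(lam, s=1.0)                    # p(lambda)
    lp += stats.norm.logpdf(x, scale=lam).sum()               # p(x | lambda)
    lp += stats.norm.logpdf(y, loc=A @ x, scale=theta).sum()  # p(y | x, theta)
    return lp

# Learning phase: estimate (theta, lambda) from D = {(x_i, y_i)}.
# Inversion phase: hold them fixed and explore log_joint as a function of x alone.
```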

Bayesian nonparametrics

Since this kind of problem naturally invites functional parameters, we are in the world of Bayesian nonparametrics, which uses slightly different notation from what you usually see in Bayes textbooks.

Laplace method

We can use the Laplace approximation to approximate the latent density.

Laplace approximations have the attractive feature of also providing estimates for inverse problems (Breslow and Clayton 1993; Wacker 2017; Alexanderian et al. 2016; Alexanderian 2021) by leveraging the delta method. I think this should come out nicely in network linearization approaches such as Foong et al. (2019) and Immer, Korzepa, and Bauer (2021).
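
As a reminder of the mechanics, here is a minimal sketch of the Laplace approximation for the latent \(x\): find the MAP, then approximate the posterior by a Gaussian whose covariance is the inverse Hessian of the negative log posterior there. The toy forward map and the use of BFGS’s running inverse-Hessian estimate are conveniences for the sketch, not recommendations:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def g(x):
    """A stand-in nonlinear forward map; elementwise tanh for illustration."""
    return np.tanh(x)

sigma, tau = 0.1, 1.0
x_true = rng.normal(size=5)
y = g(x_true) + sigma * rng.normal(size=5)

def neg_log_post(x):
    # -log p(x | y) up to a constant: Gaussian likelihood plus Gaussian prior.
    return 0.5 * np.sum((y - g(x)) ** 2) / sigma**2 + 0.5 * np.sum(x**2) / tau**2

# Laplace: a Gaussian centred at the MAP, with covariance given by the
# inverse Hessian of the negative log posterior at that point.
res = minimize(neg_log_post, np.zeros(5), method="BFGS")
x_map = res.x
laplace_cov = res.hess_inv  # BFGS's estimate; a cheap stand-in for the exact inverse Hessian

print("MAP estimate:", x_map)
print("Laplace posterior std:", np.sqrt(np.diag(laplace_cov)))
```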

Suppose we have a regression network that outputs (perhaps approximately) a Gaussian distribution over outputs given inputs.
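
Then the delta method suggests the following recipe, sketched here with a stand-in two-layer “network” and a finite-difference Jacobian, both invented for illustration: linearize the network mean around a reference point, after which the approximate posterior over \(x\) reduces to the familiar linear-Gaussian update.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(8, 3)), rng.normal(size=(2, 8))

def mu(x):
    """Mean output of a tiny stand-in 'network' mapping R^3 -> R^2."""
    return W2 @ np.tanh(W1 @ x)

def jacobian(f, x0, eps=1e-6):
    # Forward-difference Jacobian; adequate for a sketch.
    f0 = f(x0)
    return np.column_stack([(f(x0 + eps * e) - f0) / eps for e in np.eye(len(x0))])

# Linearize the network mean at x0: mu(x) ~= mu(x0) + J (x - x0). With a
# Gaussian observation density N(y; mu(x), sigma^2 I) and prior N(x; x0, tau^2 I),
# the delta method turns inversion back into the linear-Gaussian update.
sigma, tau = 0.1, 1.0
x0 = np.zeros(3)
y_obs = np.array([0.3, -0.2])

J = jacobian(mu, x0)
post_cov = np.linalg.inv(J.T @ J / sigma**2 + np.eye(3) / tau**2)
post_mean = x0 + post_cov @ J.T @ (y_obs - mu(x0)) / sigma**2

print("approximate posterior mean:", post_mean)
```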

TBC

References

Alexanderian, Alen. 2021. “Optimal Experimental Design for Infinite-Dimensional Bayesian Inverse Problems Governed by PDEs: A Review.” arXiv:2005.12998 [Math], January.
Alexanderian, Alen, Noemi Petra, Georg Stadler, and Omar Ghattas. 2016. “A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-Dimensional Bayesian Nonlinear Inverse Problems.” SIAM Journal on Scientific Computing 38 (1): A243–72.
Borgonovo, E., W. Castaings, and S. Tarantola. 2012. “Model Emulation and Moment-Independent Sensitivity Analysis: An Application to Environmental Modelling.” Environmental Modelling & Software, Emulation techniques for the reduction and sensitivity analysis of complex environmental models, 34 (June): 105–15.
Breslow, N. E., and D. G. Clayton. 1993. “Approximate Inference in Generalized Linear Mixed Models.” Journal of the American Statistical Association 88 (421): 9–25.
Bui-Thanh, Tan. 2012. “A Gentle Tutorial on Statistical Inversion Using the Bayesian Paradigm.”
Dashti, Masoumeh, and Andrew M. Stuart. 2015. “The Bayesian Approach to Inverse Problems.” arXiv:1302.6989 [Math], July.
Foong, Andrew Y. K., Yingzhen Li, José Miguel Hernández-Lobato, and Richard E. Turner. 2019. “‘In-Between’ Uncertainty in Bayesian Neural Networks.” arXiv:1906.11537 [Cs, Stat], June.
Giordano, Matteo, and Richard Nickl. 2020. “Consistency of Bayesian Inference with Gaussian Process Priors in an Elliptic Inverse Problem.” Inverse Problems 36 (8): 085001.
Immer, Alexander, Maciej Korzepa, and Matthias Bauer. 2021. “Improving Predictions of Bayesian Neural Nets via Local Linearization.” In International Conference on Artificial Intelligence and Statistics, 703–11. PMLR.
Kaipio, Jari, and E. Somersalo. 2005. Statistical and Computational Inverse Problems. Applied Mathematical Sciences. New York: Springer-Verlag.
Kaipio, Jari, and Erkki Somersalo. 2007. “Statistical Inverse Problems: Discretization, Model Reduction and Inverse Crimes.” Journal of Computational and Applied Mathematics 198 (2): 493–504.
Kennedy, Marc C., and Anthony O’Hagan. 2001. “Bayesian Calibration of Computer Models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63 (3): 425–64.
Knapik, B. T., A. W. van der Vaart, and J. H. van Zanten. 2011. “Bayesian Inverse Problems with Gaussian Priors.” The Annals of Statistics 39 (5).
Mosegaard, Klaus, and Albert Tarantola. 1995. “Monte Carlo Sampling of Solutions to Inverse Problems.” Journal of Geophysical Research 100 (B7): 12431.
———. 2002. “Probabilistic Approach to Inverse Problems.” In International Geophysics, 81:237–65. Elsevier.
O’Hagan, A. 2006. “Bayesian Analysis of Computer Code Outputs: A Tutorial.” Reliability Engineering & System Safety, The Fourth International Conference on Sensitivity Analysis of Model Output (SAMO 2004), 91 (10): 1290–300.
Plumlee, Matthew. 2017. “Bayesian Calibration of Inexact Computer Models.” Journal of the American Statistical Association 112 (519): 1274–85.
Särkkä, Simo, A. Solin, and J. Hartikainen. 2013. “Spatiotemporal Learning via Infinite-Dimensional Bayesian Filtering and Smoothing: A Look at Gaussian Process Regression Through Kalman Filtering.” IEEE Signal Processing Magazine 30 (4): 51–61.
Schwab, C., and A. M. Stuart. 2012. “Sparse Deterministic Approximation of Bayesian Inverse Problems.” Inverse Problems 28 (4): 045003.
Stuart, A. M. 2010. “Inverse Problems: A Bayesian Perspective.” Acta Numerica 19: 451–559.
Tait, Daniel J., and Theodoros Damoulas. 2020. “Variational Autoencoding of PDE Inverse Problems.” arXiv:2006.15641 [Cs, Stat], June.
Tarantola, Albert. 2005. Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM.
———. n.d. Mapping of Probabilities.
Tonolini, Francesco, Jack Radford, Alex Turpin, Daniele Faccio, and Roderick Murray-Smith. 2020. “Variational Inference for Computational Imaging Inverse Problems.” Journal of Machine Learning Research 21 (179): 1–46.
Wacker, Philipp. 2017. “Laplace’s Method in Bayesian Inverse Problems.” arXiv:1701.07989 [Math], April.
Wei, Qi, Kai Fan, Lawrence Carin, and Katherine A. Heller. 2017. “An Inner-Loop Free Solution to Inverse Problems Using Deep Neural Networks.” arXiv:1709.01841 [Cs], September.
Zammit-Mangion, Andrew, Michael Bertolacci, Jenny Fisher, Ann Stavert, Matthew L. Rigby, Yi Cao, and Noel Cressie. 2021. “WOMBAT v1.0: A fully Bayesian global flux-inversion framework.” Geoscientific Model Development Discussions, July, 1–51.

  1. There is also a strand of the literature which refers to any form of Bayesian inference as an inverse problem, but this usage does not draw a helpful distinction so I avoid it.↩︎

