Transforms of Gaussian noise

Delta method, error propagation, unscented transform, Taylor expansion…

I have a nonlinear transformation of a Gaussian process; what is its distribution? Delta methods, influence functions, and other locally Gaussian approximations to transformed noise. This is a workhorse of Bayesian filtering and smoothing; see Simo Särkkä (2013) for a broad introduction to applications.

See transforms of RVs for non-Gaussian results.

Taylor expansion

The 1D Taylor approximation according to Arras (1998)

Not complicated, but it can be a little subtle. For a general exposition handling first- and second-order transforms, I recommend Gustafsson and Hendeby (2012), which as a bonus proves some things which seem obvious but are not, in fact, obvious to prove, and disproves some things which seemed obviously true to me. Arras (1998) is possibly the simplest introduction.

Taylor expansion works if the transformation in question is smooth enough and the approximation only needs to be accurate about the expansion point.

Todo: treat expansion point and mean separately.

Consider a general nonlinear differentiable transformation \(g\) and its second-order Taylor expansion. We apply \(g:\mathbb{R}^{n_{x}}\to\mathbb{R}^{n_{z}}\) to a variable \(x,\) defining \(z:=g(x).\) Let \(\mathrm{E}(x)=\mu_{x}\) and \(\operatorname{Var}(x)=P_{x}.\) The Hessian of the \(i^{\text{th}}\) component of \(g\) is denoted \(g_{i}^{\prime \prime},\) and \([x_i]_i\) denotes the vector whose \(i\)th element is \(x_i.\) We approximate \(z\) using the Taylor expansion, \[z=g\left(\mu_{x}\right)+g^{\prime}\left(\mu_{x}\right)\left(x-\mu_{x}\right)+\left[\frac{1}{2}\left(x-\mu_{x}\right)^{T} g_{i}^{\prime \prime}\left(\mu_{x}\right)\left(x-\mu_{x}\right)\right]_{i}.\]

Leaving aside questions of convergence for now, assume the expansion holds. We then assert \(z\sim\mathcal{N}(\mu_z,P_z)\), approximately. The first moment of \(z\) is given by \[ \mu_{z}=g\left(\mu_{x}\right)+\frac{1}{2}\left[\operatorname{tr}\left(g_{i}^{\prime \prime}\left(\mu_{x}\right) P_{x}\right)\right]_{i}. \] Further, if \(x \sim \mathcal{N}\left(\mu_{x}, P_{x}\right),\) then the second moment of \(z\) is given by \[ P_{z}=g^{\prime}\left(\mu_{x}\right) P_{x}\left(g^{\prime}\left(\mu_{x}\right)\right)^{T}+\frac{1}{2}\left[\operatorname{tr}\left(g_{i}^{\prime \prime}\left(\mu_{x}\right) P_{x} g_{j}^{\prime \prime}\left(\mu_{x}\right) P_{x}\right)\right]_{i j} \] with \(i, j=1, \ldots, n_{z}.\)
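These moment formulas transcribe directly into NumPy. Here is a minimal sketch (function names and the toy example \(g(x)=x^2\) are mine, not from the references); since a quadratic \(g\) has a vanishing third-order remainder, the second-order expansion reproduces the exact Gaussian moments in this case.

```python
import numpy as np

def taylor_moments(g, jac, hess, mu_x, P_x):
    """Second-order Taylor approximation to the moments of z = g(x)
    for x ~ N(mu_x, P_x).  `jac` returns the (n_z, n_x) Jacobian at a
    point; `hess` returns a list of n_z Hessians, one per output."""
    J = jac(mu_x)
    Hs = hess(mu_x)
    mu_z = g(mu_x) + 0.5 * np.array([np.trace(H @ P_x) for H in Hs])
    P_z = J @ P_x @ J.T + 0.5 * np.array(
        [[np.trace(Hi @ P_x @ Hj @ P_x) for Hj in Hs] for Hi in Hs]
    )
    return mu_z, P_z

# Toy check: g(x) = x**2 in 1D with x ~ N(1, 0.25).  Since g is
# quadratic the expansion is exact here: E[z] = mu^2 + sigma^2 = 1.25,
# Var[z] = 4 mu^2 sigma^2 + 2 sigma^4 = 1.125.
g = lambda x: x**2
jac = lambda x: np.diag(2.0 * x)
hess = lambda x: [2.0 * np.eye(1)]
mu_z, P_z = taylor_moments(g, jac, hess, np.array([1.0]), np.array([[0.25]]))
```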

This approach is finite-dimensional, but it also generalises to Gaussian processes, in that at any finite set of test locations we can once again form a first-order approximation. See the non-parametric case.

Note that here I have assumed we have the luxury of expanding the distribution about the mean, which is one factor encouraging me to get away with only a first-order Taylor expansion. Since I have bothered to take a second-order expansion, I should also give the expansion about an arbitrary point which is not necessarily the mean, for the sake of making the extra generality worth it.

Question: In what metric, if any, have we minimised the error of our approximation by doing this?

Monte Carlo moment matching

Classic Monte Carlo methods use a sample to approximate the moments of a distribution, as seen in ensemble Kalman methods.
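A minimal sketch of plain Monte Carlo moment matching, assuming all we want is the first two moments of \(z=g(x)\) (names and the toy problem are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_moments(g, mu_x, P_x, n=100_000):
    """Plain Monte Carlo moment matching: sample x ~ N(mu_x, P_x),
    push each draw through g, report the empirical mean and covariance."""
    x = rng.multivariate_normal(mu_x, P_x, size=n)
    z = np.array([g(xi) for xi in x])
    zc = z - z.mean(axis=0)
    return z.mean(axis=0), zc.T @ zc / (n - 1)

# Toy problem: z = x**2 with x ~ N(1, 0.25), whose exact moments are
# E[z] = 1.25 and Var[z] = 1.125; the estimates converge at rate 1/sqrt(n).
mu_z, P_z = mc_moments(lambda x: x**2, np.array([1.0]), np.array([[0.25]]))
```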

Monte Carlo gradient descent in some metric

If we choose some Monte Carlo method then we can use gradient information to approximate the target in any useful probability metric. This is not special to Gaussian processes; it works with any old stochastic variational method.

In terms of KL

Suppose we consider the approximation problem in terms of the Kullback–Leibler divergence between the approximation and the truth. TBC.

In terms of Wasserstein


Unscented transform

The great invention of Uhlmann and Julier is the unscented transform, which uses a cunningly-chosen non-random empirical sample at so-called sigma-points to approximate the transformed distribution via its moments. I think that anything using sigma points is an unscented transform? Otherwise it is just garden-variety moment-matching.

Often seen in the context of Kalman filtering.

What the Unscented Transform does is to replace the mean vector and its associated error covariance matrix with a special set of points with the same mean and covariance. In the case of the mean and covariance representing the current position estimate for a target, the UT is applied to obtain a set of points, referred to as sigma points, to which the full nonlinear equations of motion can be applied directly. In other words, instead of having to derive a linearized approximation, the equations could simply be applied to each of the points as if it were the true state of the target. The result is a transformed set of points, and the mean and covariance of that set represents the estimate of the predicted state of the target.
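A minimal sketch of the sigma-point construction in NumPy, using the common scaled parameterisation (the defaults for \(\alpha,\beta,\kappa\) follow filtering-folklore conventions; treat this as illustrative rather than a reference implementation):

```python
import numpy as np

def unscented_moments(g, mu_x, P_x, alpha=0.1, beta=2.0, kappa=0.0):
    """Moments of z = g(x), x ~ N(mu_x, P_x), via 2n+1 deterministic
    sigma points with the same mean and covariance as x, pushed
    directly through the nonlinearity g."""
    n = mu_x.size
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P_x)    # columns set the spread
    sigma = np.vstack([mu_x[None, :], mu_x + L.T, mu_x - L.T])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))   # mean weights
    wc = wm.copy()                             # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1.0 - alpha**2 + beta
    Z = np.array([g(s) for s in sigma])        # transformed sigma points
    mu_z = wm @ Z
    d = Z - mu_z
    P_z = (wc[:, None] * d).T @ d
    return mu_z, P_z

# On the quadratic toy problem z = x**2, x ~ N(1, 0.25), the sigma-point
# estimates recover the exact Gaussian moments E[z] = 1.25, Var[z] = 1.125.
mu_z, P_z = unscented_moments(lambda x: x**2, np.array([1.0]), np.array([[0.25]]))
```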

See, e.g., Roth, Hendeby, and Gustafsson (2016) and a comparison with the Taylor expansion in Gustafsson and Hendeby (2012).

Question: What would we need to do to apply the unscented transform to non-Gaussian distributions? See Ebeigbe et al. (2021).


See GP by GD.

Chaos expansions

See chaos expansions.

Gaussian processes

Propagating error of Gaussian process inputs is a functional GP problem. TBD.

Related, propagating error through a GP regression. See Emmanuel Johnson’s Linearized GP site (mildly idiosyncratic notation and very idiosyncratic website navigation).
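As a toy illustration of the linearised approach (a first-order delta-method correction to the predictive variance only; the fixed hyperparameters and the finite-difference derivative are my own choices for the sketch, not Johnson's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit a tiny RBF-kernel GP to noisy sine data; hyperparameters are set
# by hand purely for illustration.
X = np.linspace(-3.0, 3.0, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
ell, sf2, sn2 = 1.0, 1.0, 0.01
k = lambda a, b: sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
weights = np.linalg.solve(k(X, X) + sn2 * np.eye(20), y)
post_mean = lambda xs: k(np.atleast_1d(xs), X) @ weights

# Delta method for an uncertain test input x* ~ N(mu_in, var_in): the
# extra predictive variance is roughly m'(mu_in)^2 * var_in, with the
# derivative of the posterior mean taken here by finite differences.
mu_in, var_in = 0.5, 0.04
eps = 1e-5
m = post_mean(mu_in)[0]
dm = (post_mean(mu_in + eps)[0] - post_mean(mu_in - eps)[0]) / (2 * eps)
var_out = dm**2 * var_in
```

A fuller treatment would also carry the GP's own posterior variance through the approximation (as in the Girard et al. line of work), rather than just the mean function.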

The following references from Emmanuel Johnson’s lit review look promising: Deisenroth and Mohamed (2012); Girard and Murray-Smith (2003); Ko and Fox (2009); and McHutchon and Rasmussen (2011).

I am curious what, if anything, they add to Murray-Smith and Pearlmutter (2005).


Arras, Kai Oliver. 1998. An Introduction To Error Propagation: Derivation, Meaning and Examples of Equation \(C_Y = F_X C_X F_X^T\).”
Bickson, Danny. 2009. Gaussian Belief Propagation: Theory and Application.” PhD.
Bishop, Adrian N., and Arnaud Doucet. 2014. Distributed Nonlinear Consensus in the Space of Probability Measures.” IFAC Proceedings Volumes, 19th IFAC World Congress, 47 (3): 8662–68.
Calandra, R., J. Peters, C. E. Rasmussen, and M. P. Deisenroth. 2016. Manifold Gaussian Processes for Regression.” In 2016 International Joint Conference on Neural Networks (IJCNN), 3338–45. Vancouver, BC, Canada: IEEE.
Davison, Andrew J., and Joseph Ortiz. 2019. FutureMapping 2: Gaussian Belief Propagation for Spatial AI.” arXiv:1910.14139 [Cs], October.
Deisenroth, Marc Peter, and Shakir Mohamed. 2012. Expectation Propagation in Gaussian Process Dynamical Systems.” In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2, 25:2609–17. NIPS’12. Red Hook, NY, USA: Curran Associates Inc.
———. 2016. Expectation Propagation in Gaussian Process Dynamical Systems: Extended Version.” arXiv:1207.2940 [Cs, Stat], August.
Ebeigbe, Donald, Tyrus Berry, Michael M. Norton, Andrew J. Whalen, Dan Simon, Timothy Sauer, and Steven J. Schiff. 2021. A Generalized Unscented Transformation for Probability Distributions.” ArXiv, April, arXiv:2104.01958v1.
Girard, Agathe, and Roderick Murray-Smith. 2003. “Learning a Gaussian Process Model with Uncertain Inputs.”
Girard, Agathe, Carl Edward Rasmussen, and Roderick Murray-Smith. 2002. “Gaussian Process Priors with Uncertain Inputs: Multiple-Step-Ahead Prediction.”
Grosse, Roger. 2021. Taylor Approximations.” In CSC2541 Winter 2021, Chapter 2.
Gustafsson, Fredrik, and Gustaf Hendeby. 2008. On Nonlinear Transformations of Stochastic Variables and Its Application to Nonlinear Filtering.” In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, 3617–20.
———. 2012. Some Relations Between Extended and Unscented Kalman Filters.” IEEE Transactions on Signal Processing 60 (2): 545–55.
Hegde, Pashupati, Markus Heinonen, Harri Lähdesmäki, and Samuel Kaski. 2018. Deep Learning with Differential Gaussian Process Flows.” arXiv:1810.04066 [Cs, Stat], October.
Holderrieth, Peter, Michael J. Hutchinson, and Yee Whye Teh. 2021. Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes.” In Proceedings of the 38th International Conference on Machine Learning, 4297–307. PMLR.
Jankowiak, Martin, Geoff Pleiss, and Jacob Gardner. 2020. Deep Sigma Point Processes.” In Conference on Uncertainty in Artificial Intelligence, 789–98. PMLR.
Jin, He-hui, and Kenneth L. Judd. n.d. “Perturbation Methods for General Dynamic Stochastic Models.” Hoover Institution.
Ko, Jonathan, and Dieter Fox. 2009. GP-BayesFilters: Bayesian Filtering Using Gaussian Process Prediction and Observation Models.” In Autonomous Robots, 27:75–90.
Lin, Wu, Mohammad Emtiyaz Khan, and Mark Schmidt. 2019. Stein’s Lemma for the Reparameterization Trick with Exponential Family Mixtures.” arXiv:1910.13398 [Cs, Stat], October.
Liou, Jun-Jih, Yuan-Fong Su, Jie-Lun Chiang, and Ke-Sheng Cheng. 2011. Gamma Random Field Simulation by a Covariance Matrix Transformation Method.” Stochastic Environmental Research and Risk Assessment 25 (2): 235–51.
Majumdar, Rajeshwari, and Suman Majumdar. 2019. On the Conditional Distribution of a Multivariate Normal Given a Transformation – the Linear Case.” Heliyon 5 (2): e01136.
Marzouk, Youssef, Tarek Moselhy, Matthew Parno, and Alessio Spantini. 2016. Sampling via Measure Transport: An Introduction.” In Handbook of Uncertainty Quantification, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1:1–41. Cham: Springer Heidelberg.
McHutchon, Andrew, and Carl Edward Rasmussen. 2011. Gaussian Process Training with Input Noise.” In Proceedings of the 24th International Conference on Neural Information Processing Systems, 24:1341–49. NIPS’11. Red Hook, NY, USA: Curran Associates Inc.
Meyer, Florian, Ondrej Hlinka, and Franz Hlawatsch. 2014. Sigma Point Belief Propagation.” IEEE Signal Processing Letters 21 (2): 145–49.
Minka, Thomas P. 2001. Expectation Propagation for Approximate Bayesian Inference.” In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, 362–69. UAI’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Murray-Smith, Roderick, and Barak A. Pearlmutter. 2005. Transformations of Gaussian Process Priors.” In Deterministic and Statistical Methods in Machine Learning, edited by Joab Winkler, Mahesan Niranjan, and Neil Lawrence, 110–23. Lecture Notes in Computer Science. Springer Berlin Heidelberg.
Ortiz, Joseph, Talfan Evans, and Andrew J. Davison. 2021. A Visual Introduction to Gaussian Belief Propagation.” arXiv:2107.02308 [Cs], July.
Papadopoulos, G., P.J. Edwards, and A.F. Murray. 2001. Confidence Estimation Methods for Neural Networks: A Practical Comparison.” IEEE Transactions on Neural Networks 12 (6): 1278–87.
Quiñonero-Candela, Joaquin. 2004. “Learning with Uncertainty-Gaussian Processes and Relevance Vector Machines.” Technical University of Denmark, Copenhagen.
Roth, Michael, Gustaf Hendeby, and Fredrik Gustafsson. 2016. Nonlinear Kalman Filters Explained: A Tutorial on Moment Computations and Sigma Point Methods.” Journal of Advances in Information Fusion 11 (1): 47–70.
Ruiz, Francisco J. R., Michalis K. Titsias, and David M. Blei. 2016. The Generalized Reparameterization Gradient.” In Advances In Neural Information Processing Systems.
Särkkä, S., and J. Hartikainen. 2013. Non-Linear Noise Adaptive Kalman Filtering via Variational Bayes.” In 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 1–6.
Särkkä, Simo. 2007. On Unscented Kalman Filtering for State Estimation of Continuous-Time Nonlinear Systems.” IEEE Transactions on Automatic Control 52 (9): 1631–41.
———. 2013. Bayesian Filtering and Smoothing. Institute of Mathematical Statistics Textbooks 3. Cambridge, U.K. ; New York: Cambridge University Press.
Spantini, Alessio, Ricardo Baptista, and Youssef Marzouk. 2022. Coupling Techniques for Nonlinear Ensemble Filtering.” SIAM Review 64 (4): 921–53.
Tran, Dustin, Mike Dusenberry, Mark van der Wilk, and Danijar Hafner. 2019. “Bayesian Layers: A Module for Neural Network Uncertainty.” Advances in Neural Information Processing Systems 32.
Tran, Dustin, Rajesh Ranganath, and David M. Blei. 2015. The Variational Gaussian Process.” In Proceedings of ICLR.
Wilkinson, William J., Paul E. Chang, Michael Riis Andersen, and Arno Solin. 2020. State Space Expectation Propagation: Efficient Inference Schemes for Temporal Gaussian Processes.” In ICML.
Wilkinson, William J., Paul E. Chang, Michael Riis Andersen, and Arno Solin. 2019. “Global Approximate Inference via Local Linearisation for Temporal Gaussian Processes.”
Wilkinson, William J., Simo Särkkä, and Arno Solin. 2021. Bayes-Newton Methods for Approximate Bayesian Inference with PSD Guarantees.” arXiv.
Wolter, Kirk M. 2007. Introduction to Variance Estimation. 2nd ed. Statistics for Social and Behavioral Sciences. New York: Springer.
