I have a nonlinear transformation of a random process. What is its distribution?

Related: What is the gradient of the transform? That is the topic of the reparameterization trick.

## Taylor expansion

Not complicated but subtle (Gustafsson and Hendeby 2012).

Consider a general nonlinear differentiable transformation \(g\) and its second-order Taylor expansion. Consider the mapping \(g:\mathbb{R}^{n_{x}}\to\mathbb{R}^{n_{z}}\) applied to a random variable \(x,\) defining \(z:=g(x).\) Let \(\mathrm{E}(x)=\mu_{x}\) and \(\operatorname{Var}(x)=P_{x}.\) The Hessian of the \(i\)th component of \(g\) is denoted \(g_{i}^{\prime \prime}.\) The notation \([a_i]_i\) denotes the vector whose \(i\)th element is \(a_i\), and \([a_{ij}]_{ij}\) the matrix whose \((i,j)\)th element is \(a_{ij}\). We approximate \(z\) by the second-order Taylor expansion
\[z \approx g\left(\mu_{x}\right)+g^{\prime}\left(\mu_{x}\right)\left(x-\mu_{x}\right)+\left[\frac{1}{2}\left(x-\mu_{x}\right)^{T} g_{i}^{\prime \prime}\left(\mu_{x}\right)\left(x-\mu_{x}\right)\right]_{i},\]
leaving aside for now the question of when this expansion converges. Taking expectations, the first moment of \(z\) is
\[ \mu_{z}=g\left(\mu_{x}\right)+\frac{1}{2}\left[\operatorname{tr}\left(g_{i}^{\prime \prime}\left(\mu_{x}\right) P_{x}\right)\right]_{i}. \]
Further, if \(x \sim \mathcal{N}\left(\mu_{x}, P_{x}\right),\) then the covariance of \(z\) is
\[ P_{z}=g^{\prime}\left(\mu_{x}\right) P_{x}\left(g^{\prime}\left(\mu_{x}\right)\right)^{T}+\frac{1}{2}\left[\operatorname{tr}\left(g_{i}^{\prime \prime}\left(\mu_{x}\right) P_{x} g_{j}^{\prime \prime}\left(\mu_{x}\right) P_{x}\right)\right]_{i j}, \]
with \(i, j=1, \ldots, n_{z}.\)
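These moment formulae are easy to check numerically. A minimal sketch, with a quadratic map \(g(x)=x_0^2+x_1\) of my own choosing (Jacobian and Hessian written out by hand); since \(g\) is quadratic, the second-order formulae are exact for Gaussian \(x\):

```python
import numpy as np

# Illustrative map g: R^2 -> R^1, g(x) = [x0^2 + x1] (my choice, not from the source).
mu_x = np.array([1.0, 2.0])
P_x = np.array([[0.5, 0.1],
                [0.1, 0.3]])

def g(x):
    return np.array([x[0] ** 2 + x[1]])

# Jacobian g'(mu_x): shape (1, 2)
J = np.array([[2 * mu_x[0], 1.0]])
# Hessian of the single output component: shape (2, 2)
H = [np.array([[2.0, 0.0],
               [0.0, 0.0]])]
n_z = len(H)

# mu_z = g(mu_x) + 0.5 * [tr(g_i'' P_x)]_i
mu_z = g(mu_x) + 0.5 * np.array([np.trace(Hi @ P_x) for Hi in H])

# P_z = g' P_x g'^T + 0.5 * [tr(g_i'' P_x g_j'' P_x)]_{ij}
P_z = J @ P_x @ J.T + 0.5 * np.array(
    [[np.trace(H[i] @ P_x @ H[j] @ P_x) for j in range(n_z)]
     for i in range(n_z)]
)
print(mu_z, P_z)
```

Here the exact Gaussian moments are \(\mu_z = 3.5\) and \(P_z = 3.2\), which the formulae recover.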

## Unscented transform

The great invention of Uhlmann and Julier is the *unscented transform*, which uses a cunningly chosen, deterministic set of sample points, the so-called *\(\sigma\)-points*, to approximate the transformed distribution via its moments.

The unscented transform is mostly seen in the context of Kalman filtering. It replaces the mean vector and its associated error covariance matrix with a special set of points, the *sigma points*, constructed to have the same mean and covariance. If the mean and covariance represent, say, the current position estimate for a target, the full nonlinear equations of motion can then be applied to each sigma point directly, as if it were the true state of the target, with no need to derive a linearized approximation. The mean and covariance of the transformed set of points represent the estimate of the predicted state of the target.
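A minimal sketch of the classic \(2n+1\) sigma-point construction (Julier–Uhlmann style, with a scaling parameter \(\kappa\); function and variable names are mine, not a reference implementation):

```python
import numpy as np

def unscented_transform(g, mu_x, P_x, kappa=0.0):
    """Propagate (mu_x, P_x) through a nonlinearity g via sigma points."""
    n = len(mu_x)
    # Matrix square root of (n + kappa) * P_x; columns give the offsets.
    S = np.linalg.cholesky((n + kappa) * P_x)
    # 2n + 1 sigma points with the same weighted mean and covariance as (mu_x, P_x).
    sigma = np.vstack([mu_x, mu_x + S.T, mu_x - S.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Push each sigma point through the nonlinearity directly.
    Z = np.array([g(s) for s in sigma])
    mu_z = w @ Z
    P_z = (w[:, None] * (Z - mu_z)).T @ (Z - mu_z)
    return mu_z, P_z

# For a linear map the sigma-point moments are exact:
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
mu_x = np.array([1.0, 2.0])
P_x = np.array([[0.5, 0.1],
                [0.1, 0.3]])
mu_z, P_z = unscented_transform(lambda x: A @ x, mu_x, P_x, kappa=1.0)
print(mu_z, P_z)
```

For linear \(g\) this recovers \(A\mu_x\) and \(AP_xA^T\) exactly; for nonlinear \(g\) it captures the moments to higher order than first-order linearization, without computing any derivatives.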

See, e.g., Roth, Hendeby, and Gustafsson (2016) and a comparison with the Taylor expansion in Gustafsson and Hendeby (2012).

## Stein’s lemma

As seen in Stein’s method. It gives exact moment identities for certain exponential-family random variables (typically Gaussian) under suitably matched transforms.
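In the Gaussian case, for \(X \sim \mathcal{N}(\mu, \sigma^2)\) and differentiable \(g\), Stein’s lemma states \(\mathrm{E}[g(X)(X-\mu)] = \sigma^2\,\mathrm{E}[g'(X)]\). A quick Monte Carlo sanity check with \(g(x)=x^2\) (my choice), where both sides equal \(2\mu\sigma^2\) analytically:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7
x = rng.normal(mu, sigma, size=1_000_000)

lhs = np.mean(x ** 2 * (x - mu))     # E[g(X)(X - mu)]
rhs = sigma ** 2 * np.mean(2 * x)    # sigma^2 * E[g'(X)]
exact = 2 * mu * sigma ** 2          # analytic value for g(x) = x^2
print(lhs, rhs, exact)
```

Both Monte Carlo estimates should agree with the analytic value up to sampling error.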

## Stochastic Itô-Taylor expansion

Taylor expansions for stochastic processes; see the stochastic Taylor expansion.
**tl;dr**: Usually more trouble than it is worth.

## References

*Handbook of Financial Econometrics: Tools and Techniques*, 1–66. Elsevier.

*Malaysian Journal of Fundamental and Applied Sciences* 13 (3).

*Journal of Economic Dynamics and Control* 25 (6–7): 979–99.

*2008 IEEE International Conference on Acoustics, Speech and Signal Processing*, 3617–20.

*IEEE Transactions on Signal Processing* 60 (2): 545–55.

*Lévy Processes: Theory and Applications*, edited by Ole E. Barndorff-Nielsen, Sidney I. Resnick, and Thomas Mikosch, 139–68. Boston, MA: Birkhäuser.

*Mathematische Nachrichten* 151 (1): 33–50.

*Stochastic Analysis and Applications* 10 (4): 431–41.

*Numerical Solution of Stochastic Differential Equations*, edited by Peter E. Kloeden and Eckhard Platen, 161–226. Applications of Mathematics. Berlin, Heidelberg: Springer.

*Numerical Solution of Stochastic Differential Equations*. Berlin, Heidelberg: Springer Berlin Heidelberg.

*arXiv:1910.13398 [cs, stat]*, October.

*arXiv:0906.5581 [math, q-fin]*, October.

*Stochastic Analysis and Applications* 22 (6): 1553–76.

*Journal of Economic Dynamics and Control* 28 (4): 755–75.

*Introduction to Variance Estimation*. 2nd ed. Statistics for Social and Behavioral Sciences. New York: Springer.
