# Transforms of RVs

I have a nonlinear transformation of a random process. What is its distribution?

Related: What is the gradient of the transform? That is the topic of the reparameterization trick.

## Taylor expansion

Not complicated, but subtle.

Consider a general nonlinear, twice-differentiable mapping $$g:\mathbb{R}^{n_{x}}\to\mathbb{R}^{n_{z}}$$ applied to a random variable $$x,$$ defining $$z:=g(x).$$ Let $$\mathrm{E}(x)=\mu_{x}$$ and $$\operatorname{Var}(x)=P_{x}.$$ The Hessian of the $$i^{\text{th}}$$ component of $$g$$ is denoted $$g_{i}^{\prime \prime},$$ and $$[a_i]_i$$ denotes the vector whose $$i$$th element is $$a_i$$.

We approximate $$z$$ by its second-order Taylor expansion about $$\mu_x,$$ $z=g\left(\mu_{x}\right)+g^{\prime}\left(\mu_{x}\right)\left(x-\mu_{x}\right)+\left[\frac{1}{2}\left(x-\mu_{x}\right)^{T} g_{i}^{\prime \prime}\left(\mu_{x}\right)\left(x-\mu_{x}\right)\right]_{i},$ leaving aside for now the question of when this expansion converges. Then the first moment of $$z$$ is approximated by $\mu_{z}=g\left(\mu_{x}\right)+\frac{1}{2}\left[\operatorname{tr}\left(g_{i}^{\prime \prime}\left(\mu_{x}\right) P_{x}\right)\right]_{i}.$ Further, if $$x \sim \mathcal{N}\left(\mu_{x}, P_{x}\right)$$, the second moment of $$z$$ is approximated by $P_{z}=g^{\prime}\left(\mu_{x}\right) P_{x}\left(g^{\prime}\left(\mu_{x}\right)\right)^{T}+\frac{1}{2}\left[\operatorname{tr}\left(g_{i}^{\prime \prime}\left(\mu_{x}\right) P_{x} g_{j}^{\prime \prime}\left(\mu_{x}\right) P_{x}\right)\right]_{i j},$ with $$i, j=1, \ldots, n_{z}.$$
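A minimal sketch of these moment formulas in the scalar case, where the traces collapse to products. The function name `taylor_moments` and the choice of $$g(x)=x^2$$ are illustrative; for a quadratic $$g$$ the second-order expansion is exact, which makes it a convenient check.

```python
import numpy as np

def taylor_moments(g, dg, d2g, mu_x, var_x):
    """Second-order Taylor approximation of the mean and variance of
    z = g(x) for scalar Gaussian x ~ N(mu_x, var_x).

    mu_z  = g(mu) + (1/2) g''(mu) var
    var_z = g'(mu)^2 var + (1/2) g''(mu)^2 var^2
    """
    mu_z = g(mu_x) + 0.5 * d2g(mu_x) * var_x
    var_z = dg(mu_x) ** 2 * var_x + 0.5 * d2g(mu_x) ** 2 * var_x ** 2
    return mu_z, var_z

# Example: g(x) = x^2, where the expansion terminates and is exact.
mu_x, var_x = 1.0, 0.25
mu_z, var_z = taylor_moments(
    lambda x: x ** 2, lambda x: 2 * x, lambda x: 2.0, mu_x, var_x
)
# Exact moments for comparison: E[x^2] = mu^2 + var,
# Var(x^2) = 4 mu^2 var + 2 var^2.
```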

## Unscented transform

The great invention of Uhlmann and Julier is the unscented transform, which uses a cunningly-chosen non-random empirical sample at so-called $$\sigma$$-points to approximate the transformed distribution via its moments.

It is mostly seen in the context of Kalman filtering.

What the Unscented Transform does is to replace the mean vector and its associated error covariance matrix with a special set of points with the same mean and covariance. In the case of the mean and covariance representing the current position estimate for a target, the UT is applied to obtain a set of points, referred to as sigma points, to which the full nonlinear equations of motion can be applied directly. In other words, instead of having to derive a linearized approximation, the equations could simply be applied to each of the points as if it were the true state of the target. The result is a transformed set of points, and the mean and covariance of that set represents the estimate of the predicted state of the target.
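The procedure above can be sketched in a few lines. This is a basic, unscaled variant with a single spread parameter `kappa` (the scaled form adds further tuning parameters); the function name `unscented_transform` is my own. The sanity check uses a linear map, for which the UT reproduces the transformed mean and covariance exactly.

```python
import numpy as np

def unscented_transform(g, mu, P, kappa=0.0):
    """Propagate mean mu and covariance P through a nonlinearity g
    using 2n+1 sigma points (basic, unscaled unscented transform)."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * P)  # columns are sigma offsets
    sigma = np.vstack([mu, mu + L.T, mu - L.T])  # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Push each sigma point through the full nonlinear map directly.
    Z = np.array([g(s) for s in sigma])
    mu_z = w @ Z
    D = Z - mu_z
    P_z = (w[:, None] * D).T @ D
    return mu_z, P_z

# Sanity check on a linear map, where the UT is exact:
A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([0.5, -1.0])
mu = np.array([1.0, 2.0])
P = np.diag([0.5, 0.3])
mu_z, P_z = unscented_transform(lambda x: A @ x + b, mu, P)
# mu_z should match A @ mu + b, and P_z should match A @ P @ A.T
```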

See, e.g., and a comparison with the Taylor expansion in .

## Stein’s lemma

As seen in Stein’s method. Gives you the special case of certain exponential-family RVs (typically Gaussian) under certain matched transforms.
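For the Gaussian case, Stein’s lemma says that for $$X \sim \mathcal{N}(\mu, \sigma^2)$$ and differentiable $$g$$ (with mild integrability conditions), $$\mathrm{E}[g(X)(X-\mu)] = \sigma^2\, \mathrm{E}[g'(X)].$$ A quick Monte Carlo check, with $$g=\sin$$ chosen purely as an example:

```python
import numpy as np

# Stein's lemma for Gaussian X: E[g(X)(X - mu)] = sigma^2 E[g'(X)].
# Monte Carlo demonstration with g(x) = sin(x), g'(x) = cos(x).
rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.2
x = rng.normal(mu, sigma, 1_000_000)
lhs = np.mean(np.sin(x) * (x - mu))
rhs = sigma ** 2 * np.mean(np.cos(x))
# lhs and rhs agree up to Monte Carlo error
```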

## Stochastic Itô-Taylor expansion

Taylor expansions for stochastic processes; see stochastic Taylor expansion. tl;dr: usually more trouble than it is worth.
