# Reparameterization methods for MC gradient estimation

Pathwise gradient estimation.

April 4, 2018 — May 2, 2023

Reparameterization trick. A trick where we cleverly transform RVs so that we can sample from tricky target distributions, and track the Jacobians of the transform, via a “nice” source distribution. Useful in e.g. variational inference, especially variational autoencoders, and for density estimation in probabilistic deep learning. Pairs well with normalizing flows to get powerful target distributions. Storchastic credits pathwise gradients to Glasserman and Ho (1991) as *perturbation analysis*. According to Bloem-Reddy and Teh (2020), the reparameterisation trick is an application of noise outsourcing.
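As a concrete illustration (a minimal JAX sketch, not from any of the cited sources), the simplest instance of the trick is the location-scale reparameterisation of a Gaussian: push “nice” standard-normal noise through a deterministic, differentiable map to get samples from the target. The parameter values here are arbitrary.

```python
# A minimal sketch (in JAX, not from the source): the location-scale
# reparameterization of a Gaussian. "Nice" standard-normal noise eps is pushed
# through a deterministic, differentiable map g(eps, theta) so that the output
# is a sample from the "tricky" target N(mu, sigma^2).
import jax
import jax.numpy as jnp

def sample_gaussian(theta, eps):
    """g(eps, theta): transform N(0, 1) noise into a N(mu, sigma^2) sample."""
    mu, log_sigma = theta
    return mu + jnp.exp(log_sigma) * eps

key = jax.random.PRNGKey(0)
eps = jax.random.normal(key, (10_000,))   # draws from the source p(eps) = N(0, 1)
theta = (1.5, jnp.log(0.5))               # target is N(1.5, 0.5**2)
z = sample_gaussian(theta, eps)           # draws from the target p(z; theta)
print(z.mean(), z.std())                  # roughly 1.5 and 0.5
```

Because the sample is a deterministic function of the noise and the parameters, gradients with respect to the parameters can flow through it, which is what the derivation below exploits.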

## 1 Tutorials

- Shakir Mohamed, Machine Learning Trick of the Day (4): Reparameterisation Tricks:

Suppose we want the gradient of an expectation of a smooth function \(f\): \[ \nabla_\theta \mathbb{E}_{p(z; \theta)}[f(z)] = \nabla_\theta \int p(z; \theta) f(z) \, dz \] […] This gradient is often difficult to compute because the integral is typically unknown and the parameters \(\theta\), with respect to which we are computing the gradient, are of the distribution \(p(z; \theta)\).

Now we suppose that we know some function \(g\) such that for some easy distribution \(p(\epsilon)\), \(z | \theta=g(\epsilon, \theta)\). Now we can try to estimate the gradient of the expectation by Monte Carlo:

\[ \nabla_\theta \mathbb{E}_{p(z; \theta)}[f(z)] = \mathbb{E}_{p(\epsilon)}\left[\nabla_\theta f(g(\epsilon, \theta))\right] \] Let’s derive this expression and explore the implications of it for our optimisation problem. One-liners give us a transformation from a distribution \(p(\epsilon)\) to another \(p(z)\), thus the differential area (mass of the distribution) is invariant under the change of variables. This property implies that: \[ p(z) = \left|\frac{d\epsilon}{dz}\right| p(\epsilon) \Longrightarrow |p(z)\, dz| = |p(\epsilon)\, d\epsilon| \] Re-expressing the troublesome stochastic optimisation problem using random variate reparameterisation, we find: \[ \begin{aligned} \nabla_\theta \mathbb{E}_{p(z; \theta)}[f(z)] &= \nabla_\theta \int p(z; \theta) f(z) \, dz \\ &= \nabla_\theta \int p(\epsilon) f(z) \, d\epsilon \\ &= \nabla_\theta \int p(\epsilon) f(g(\epsilon, \theta)) \, d\epsilon \\ &= \nabla_\theta \mathbb{E}_{p(\epsilon)}[f(g(\epsilon, \theta))] \\ &= \mathbb{E}_{p(\epsilon)}\left[\nabla_\theta f(g(\epsilon, \theta))\right] \end{aligned} \]
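As a sanity check on this derivation, here is a minimal JAX sketch (not from the post; the integrand \(f(z) = z^2\) and the Gaussian parameterisation \(\theta = (\mu, \log\sigma)\) are illustrative assumptions): estimate \(\mathbb{E}_{p(z;\theta)}[f(z)]\) by Monte Carlo through \(z = \mu + \sigma\epsilon\) with \(\epsilon \sim \mathcal{N}(0,1)\), and let autodiff take \(\nabla_\theta\) through the sampler.

```python
# A minimal sketch (in JAX, not from the post) of the pathwise gradient:
# differentiate a Monte Carlo estimate of E_{p(z; theta)}[f(z)] by pushing
# gradients through z = g(eps, theta), eps ~ p(eps). The integrand f(z) = z**2
# and the parameterization theta = (mu, log_sigma) are illustrative choices.
import jax
import jax.numpy as jnp

def f(z):
    return z ** 2                          # E[f(z)] = mu**2 + sigma**2 in closed form

def mc_objective(theta, eps):
    mu, log_sigma = theta
    z = mu + jnp.exp(log_sigma) * eps      # z = g(eps, theta), the reparameterization
    return jnp.mean(f(z))                  # Monte Carlo estimate of E_{p(z; theta)}[f(z)]

key = jax.random.PRNGKey(0)
eps = jax.random.normal(key, (100_000,))   # samples from the fixed source p(eps)
theta = (1.5, jnp.log(0.5))                # mu = 1.5, sigma = 0.5

grad_est = jax.grad(mc_objective)(theta, eps)
# Analytic gradients of mu**2 + sigma**2: d/dmu = 2*mu = 3.0,
# d/dlog_sigma = 2*sigma**2 = 0.5; grad_est should be close to these.
print(grad_est)
```

For this choice of \(f\) the expectation is \(\mu^2 + \sigma^2\), so the pathwise estimate can be checked against the analytic gradient; for general \(f\) no closed form is available and the Monte Carlo estimate is the point of the exercise.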

Yuge Shi’s variational inference tutorial is a tour of cunning reparameterisation gradient tricks written for her paper Shi et al. (2019). She punts some details to Mohamed et al. (2020), which in turn tells me that this adventure continues at Monte Carlo gradient estimation, and in Figurnov, Mohamed, and Mnih (2018), Devroye (2006), and Jankowiak and Obermeyer (2018).

## 2 Normalizing flows

Cunning reparameterization maps with desirable properties for nonparametric density inference. See normalizing flows.

## 3 General measure transport

See transport maps.

## 4 Tooling

## 5 Incoming

Universal representation theorems? There are probably many; here are some I saw: Perekrestenko, Müller, and Bölcskei (2020); Perekrestenko, Eberhard, and Bölcskei (2021).

## 6 References

*Gradient Flows: In Metric Spaces and in the Space of Probability Measures*. Lectures in Mathematics. ETH Zürich.

*arXiv:1707.01069 [Cs, Stat]*.

*Advances in Neural Information Processing Systems*.

*arXiv:2105.04471 [Cs, Stat]*.

*arXiv:1709.01179 [Stat]*.

*Advances in Neural Information Processing Systems 31*.

*Simulation*. Handbooks in Operations Research and Management Science.

*Advances In Neural Information Processing Systems*.

*Advances in Neural Information Processing Systems 31*.

*Gradient Estimation Via Perturbation Analysis*.

*arXiv:1810.01367 [Cs, Stat]*.

*arXiv:1804.00779 [Cs, Stat]*.

*International Conference on Machine Learning*.

*Advances in Neural Information Processing Systems 31*.

*Advances in Neural Information Processing Systems 29*.

*Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2*. NIPS’15.

*ICLR 2014 Conference*.

*arXiv:2010.01155 [Cs, Stat]*.

*arXiv:1910.13398 [Cs, Stat]*.

*PMLR*.

*Advances in Neural Information Processing Systems*.

*Handbook of Uncertainty Quantification*.

*arXiv:2003.08063 [Cs, Math, Stat]*.

*Journal of Machine Learning Research*.

*arXiv:2007.00248 [Stat]*.

*Advances in Neural Information Processing Systems 30*.

*Journal of Machine Learning Research*.

*Partial Differential Equations and Applications*.

*Neural Computation*.

*International Conference on Machine Learning*. ICML’15.

*Proceedings of ICML*.

*arXiv:1302.5125 [Cs, Stat]*.

*Advances In Neural Information Processing Systems*.

*arXiv:1911.03393 [Cs, Stat]*.

*SIAM Review*.

*Journal of Machine Learning Research*.

*Communications on Pure and Applied Mathematics*.

*Communications in Mathematical Sciences*.

*UAI18*.

*Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*.

*Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*.

*arXiv:1809.10330 [Cs, Stat]*.

*arXiv:2101.12353 [Cs, Math, Stat]*.

*arXiv:1801.07922 [Math]*.

*Journal of Geophysical Research: Solid Earth*.