The importance of possessive apostrophes
A famous generic method for approximating distributions, quantifying discrepancies, and manufacturing concentration bounds and limit theorems is Stein’s method, typically in the form of the method of exchangeable pairs (Stein 1972, 1986). Wikipedia will do as a rough intro for now, although its info is rather out of date. There does not seem to be a single thorough introduction to all the modern and useful tools; different bits are introduced in Barbour and Chen (2005); Chatterjee (2014); Meckes (2012); Ross (2011).
Cheng Soon Ong says:
If we have a distribution P, and we want to measure the distance to P from another distribution Q (which we control), an interesting trick to measure this distance is to define an operator T. This operator T, called the Stein operator, allows us to measure the distance between distributions by considering the distance between test functions on the random variables corresponding to P and Q. This is a more general structure than integral probability metrics, which in turn is a more general version of Wasserstein distance.
Probably the best starting intro is Lily Li’s Whirlwind Tour.
In these notes we summarize Cindy Zhang’s survey (Zhang 2016) of Nathan Ross’ survey (Ross 2011) on the Fundamentals of Stein’s Method with particular emphasis on the proof of the Central Limit Theorem.
See also the video lecture “The Stein–Chen method” by Dr Fraser Daly, lecture 1.
Elizabeth Meckes’ lecture Stein’s Method: The last gadget under the hood finally crystallizes it for me:
The big idea: the Stein equation

We need to solve the Stein equation: given a function \(g\), find \(f\) such that \[ T_o f(x)=g(x)-\mathbb{E} g(X). \] We use \(U_o\) to denote the operator that gives the solution of the Stein equation: \[ f(x)=U_o g(x). \] If \(f=U_o g\), observe that \[ \mathbb{E} T_o f(Y)=\mathbb{E} g(Y)-\mathbb{E} g(X). \] Thus if \(\mathbb{E} T_o f(Y)\) is small, then \(\mathbb{E} g(Y)-\mathbb{E} g(X)\) is small.
This leads naturally to notions of distance between the random variables \(X\) and \(Y\) which can be expressed in the form \[ d(X, Y)=\sup _{\mathcal{F}}|\mathbb{E} g(X)-\mathbb{E} g(Y)|, \] where the supremum is over some class \(\mathcal{F}\) of test functions \(g\). Examples: \[ \begin{aligned} \mathcal{F}&=\left\{f:\left\|f^{\prime}\right\|_{\infty} \leq 1\right\} &&\longleftrightarrow \text{Wasserstein distance,} \\ \mathcal{F}&=\left\{f:\|f\|_{\infty}+\left\|f^{\prime}\right\|_{\infty} \leq 1\right\} &&\longleftrightarrow \text{bounded Lipschitz distance.} \end{aligned} \]
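For concreteness, the first of these can be estimated directly from samples. A minimal sketch (the choice of variables is mine, purely for illustration): `scipy.stats.wasserstein_distance` computes the empirical 1-Wasserstein distance between a standardized sum of uniforms and a Gaussian sample.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(6)
n, reps = 30, 20_000

# Y: standardized sum of n i.i.d. Uniform(0,1) variables; X: standard Gaussian.
u = rng.uniform(size=(reps, n))
y = (u.sum(axis=1) - n / 2) / np.sqrt(n / 12)
x = rng.standard_normal(reps)

# Empirical 1-Wasserstein distance, i.e. the sup over 1-Lipschitz test functions.
print(wasserstein_distance(y, x))   # small, since Y is nearly Gaussian by the CLT
```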
Instead of trying to estimate the distance between \(X\) and \(Y\) directly, the problem has been reduced to trying to estimate \(\mathbb{E} T_o f(Y)\) for some large class of functions \(f\). Why is this any better?

Various techniques are in use for estimating \(\mathbb{E} T_o f(Y)\). Among them:

- The method of exchangeable pairs (e.g. Stein’s book)
- The dependency graph method (e.g. Arratia, Goldstein, and Gordon or Barbour, Karoński, and Ruciński)
- Size-bias coupling (e.g. Goldstein and Rinott)
- Zero-bias coupling (e.g. Goldstein and Reinert)
- The generator method (Barbour)
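To make the reduction concrete, here is a minimal Monte Carlo sketch, assuming the Gaussian characterizing operator introduced below, \(T_o f(y)=f^{\prime}(y)-y f(y)\); the function names and the test functions are my own choices. Instead of estimating a distance directly as above, we average \(T_o f\) over samples of \(Y\):

```python
import numpy as np

rng = np.random.default_rng(0)

def stein_residual(y, f, f_prime):
    """Monte Carlo estimate of E[T_o f(Y)] = E[f'(Y) - Y f(Y)],
    the Gaussian Stein operator averaged over a sample."""
    return np.mean(f_prime(y) - y * f(y))

# A couple of smooth, bounded test functions f with their derivatives.
tests = {
    "sin":  (np.sin, np.cos),
    "tanh": (np.tanh, lambda y: 1.0 / np.cosh(y) ** 2),
}

n, reps = 30, 100_000
# Y1: standardized sum of i.i.d. uniforms -- approximately Gaussian by the CLT.
u = rng.uniform(size=(reps, n))
y1 = (u.sum(axis=1) - n / 2) / np.sqrt(n / 12)
# Y2: standardized exponential -- visibly non-Gaussian.
y2 = rng.exponential(size=reps) - 1.0

for name, (f, fp) in tests.items():
    print(name, stein_residual(y1, f, fp), stein_residual(y2, f, fp))
# E[T_o f(Y1)] is near zero for every f; E[T_o f(Y2)] is not.
```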
Stein operators
Gaussian
The original form, Stein’s lemma (Stein 1972), gives us the Stein operator for the Gaussian distribution in particular. Meckes (2009) explains:
The normal distribution is the unique probability measure \(\mu\) for which
\[
\int\left[f^{\prime}(x)-x f(x)\right] \mu(d x)=0
\]
for all \(f\) for which the left-hand side exists and is finite.
It is useful to think of this in terms of operators, specifically,
the operator \(\mathcal{A}_{o}\) defined on \(C^{1}\) functions by
\[
\mathcal{A}_{o} f(x)=f^{\prime}(x)-x f(x)
\]
is called the characterizing operator of the standard normal distribution.
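As a quick illustration of this operator in action, the corresponding Stein equation \(f^{\prime}(x)-x f(x)=g(x)-\mathbb{E} g(Z)\) can be solved in closed form with the integrating factor \(e^{-x^{2}/2}\). A sketch (the solution formula is standard; the helper name `solve_stein` is mine):

```python
import numpy as np
from scipy import integrate, stats

def solve_stein(g, x):
    """Solve the Gaussian Stein equation f'(x) - x f(x) = g(x) - E g(Z)
    via the integrating factor exp(-x^2/2):
    f(x) = exp(x^2/2) * int_{-inf}^x (g(t) - E g(Z)) exp(-t^2/2) dt."""
    Eg = integrate.quad(lambda t: g(t) * stats.norm.pdf(t), -np.inf, np.inf)[0]
    num = integrate.quad(lambda t: (g(t) - Eg) * stats.norm.pdf(t), -np.inf, x)[0]
    return num / stats.norm.pdf(x)   # the sqrt(2*pi) factors cancel

# Check the equation at one point with a finite-difference derivative.
g = np.tanh
x0, h = 0.7, 1e-5
f_prime = (solve_stein(g, x0 + h) - solve_stein(g, x0 - h)) / (2 * h)
lhs = f_prime - x0 * solve_stein(g, x0)
Eg = integrate.quad(lambda t: g(t) * stats.norm.pdf(t), -np.inf, np.inf)[0]
print(lhs, g(x0) - Eg)   # the two sides agree to finite-difference accuracy
```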
This is incredibly useful for approximating probability distributions by Gaussians, where it justifies Stein’s method, below.
It has apparently been extended to elliptical distributions and exponential families.
Multivariate? Why, yes please. The following lemma of Meckes (2006) gives a second-order characterizing operator for the Gaussian distribution on \(\mathbb{R}^{k}\):
For \(f \in C^{1}\left(\mathbb{R}^{k}\right)\), define the gradient of \(f\) by \(\nabla f(x)=\left(\frac{\partial f}{\partial x_{1}}(x), \ldots, \frac{\partial f}{\partial x_{k}}(x)\right)^{t}\). Define the Laplacian of \(f\) by \(\Delta f(x)=\sum_{i=1}^{k} \frac{\partial^{2} f}{\partial x_{i}^{2}}(x)\). Now, let \(Z \sim \mathcal{N}(0_k, \mathrm{I}_k)\).
- If \(f: \mathbb{R}^{k} \rightarrow \mathbb{R}\) is two times continuously differentiable and compactly supported, then \[ \mathbb{E}[\Delta f(Z)-Z \cdot \nabla f(Z)]=0 \]
- If \(Y \in \mathbb{R}^{k}\) is a random vector such that \[ \mathbb{E}[\Delta f(Y)-Y \cdot \nabla f(Y)]=0 \] for every \(f \in C^{2}\left(\mathbb{R}^{k}\right)\), then \(\mathcal{L}(Y)=\mathcal{L}(Z) .\)
- If \(g \in C_{o}^{\infty}\left(\mathbb{R}^{k}\right)\), then the function \[ U_{o} g(x):=\int_{0}^{1} \frac{1}{2 t}[\mathbb{E} g(\sqrt{t} x+\sqrt{1-t} Z)-\mathbb{E} g(Z)] d t \] is a solution to the differential equation \[ \Delta h(x)-x \cdot \nabla h(x)=g(x)-\mathbb{E} g(Z) \]
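A quick Monte Carlo check of the first claim in the lemma. This is a sketch of my own: I take \(f(x)=e^{-\|x\|^{2}}\), which is not compactly supported but decays fast enough for the identity to hold, and compute its gradient and Laplacian analytically.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 3, 1_000_000
z = rng.standard_normal((n, k))
r2 = (z ** 2).sum(axis=1)   # |Z|^2 for each sample

# For f(x) = exp(-|x|^2):  grad f = -2 x f,  Laplacian f = (4|x|^2 - 2k) f,
# so  Delta f(z) - z . grad f(z) = (6 |z|^2 - 2k) exp(-|z|^2).
val = (6 * r2 - 2 * k) * np.exp(-r2)
print(val.mean(), val.std() / np.sqrt(n))   # mean ~ 0, within Monte Carlo error
```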
Poisson
a.k.a. the Stein–Chen method. For the Poisson distribution with mean \(\lambda\), the characterizing operator is \[\mathcal{A}_{o} f(k)=\lambda f(k+1)-k f(k).\]
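A minimal numerical check, assuming nothing beyond the displayed operator (the test function and the comparison variable are my own choices): \(\mathbb{E}[\mathcal{A}_o f(N)]\) vanishes for \(N \sim \operatorname{Poisson}(\lambda)\) but not for a different count distribution with the same mean.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n = 3.5, 1_000_000
N = rng.poisson(lam, size=n)

f = lambda k: 1.0 / (1.0 + k)          # an arbitrary bounded test function
resid = lam * f(N + 1) - N * f(N)      # Stein-Chen operator applied at N
print(resid.mean())                    # ~ 0 for Poisson(lam)

# For comparison, a geometric count variable with the same mean 3.5:
M = rng.geometric(1 / 4.5, size=n) - 1   # support {0, 1, ...}, mean 3.5
print((lam * f(M + 1) - M * f(M)).mean())   # clearly nonzero
```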
Markov processes
TBD; relation to infinitesimal generators? Barbour’s generator method interprets a characterizing operator as the infinitesimal generator of a Markov process whose stationary distribution is the target; e.g. the second-order Gaussian operator \(\Delta h - x \cdot \nabla h\) above is the generator of an Ornstein–Uhlenbeck process. See perhaps Schoutens (2001).
Stein’s method via exchangeable pairs
Meckes (2009) summarises:
Heuristically, the univariate method of exchangeable pairs goes as follows. Let \(W\) be a random variable conjectured to be approximately Gaussian; assume that \(\mathbb{E} W=0\) and \(\mathbb{E} W^{2}=1 .\) From \(W,\) construct a new random variable \(W^{\prime}\) such that the pair \(\left(W, W^{\prime}\right)\) has the same distribution as \(\left(W^{\prime}, W\right) .\) This is usually done by making a “small random change” in \(W\), so that \(W\) and \(W^{\prime}\) are close. Let \(\Delta=W^{\prime}-W\). If it can be verified that there is a \(\lambda>0\) such that \[ \begin{aligned} \mathbb{E}[\Delta \mid W]=-\lambda W+E_{1} \\ \mathbb{E}\left[\Delta^{2} \mid W\right]=2 \lambda+E_{2} \\ \mathbb{E}|\Delta|^{3}=E_{3} \end{aligned} \] with the random quantities \(E_{1}, E_{2}\) and the deterministic quantity \(E_{3}\) being small compared to \(\lambda,\) then \(W\) is indeed approximately Gaussian, and its distance to Gaussian (in some metric) can be bounded in terms of the \(E_{i}\) and \(\lambda\).
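To see the conditions in action, here is a sketch of my own toy example (not from Meckes): \(W=n^{-1/2} \sum_i X_i\) for i.i.d. random signs \(X_i\), with \(W^{\prime}\) obtained by re-randomizing one uniformly chosen coordinate. Here \(\lambda=1/n\) and \(E_1=E_2=0\) exactly, which Monte Carlo confirms.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 20, 200_000

# W = sum of n i.i.d. random signs, scaled to variance 1.
x = rng.choice([-1.0, 1.0], size=(reps, n))
w = x.sum(axis=1) / np.sqrt(n)

# W' = W after re-randomizing one uniformly chosen coordinate:
# an exchangeable pair, since (W, W') has the same law as (W', W).
i = rng.integers(n, size=reps)
x_new = rng.choice([-1.0, 1.0], size=reps)
delta = (x_new - x[np.arange(reps), i]) / np.sqrt(n)

lam = 1.0 / n
# Linear-regression condition E[Delta | W] = -lam * W (exact here):
# the slope of Delta on W should be about -1/n, with zero intercept.
slope, intercept = np.polyfit(w, delta, 1)
print(slope, -lam, intercept)
# Second condition E[Delta^2 | W] = 2 * lam (also exact for +-1 signs):
print((delta ** 2).mean(), 2 * lam)
```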
This comes out very nicely where there are natural symmetries to exploit, e.g. in low-d projections.
Non-Gaussian Stein method
Stein’s method generalises, AFAICT, to any exponential-family distribution. TBD
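One generic route, hedged as my reading of the “density method”: for a target density \(p\), the operator \(\mathcal{A} f(x)=f^{\prime}(x)+(\log p)^{\prime}(x) f(x)\) satisfies \(\mathbb{E}[\mathcal{A} f(X)]=0\) when \(X \sim p\), by integration by parts, provided \(f p\) vanishes at the boundary; for the standard Gaussian, \((\log p)^{\prime}(x)=-x\) recovers exactly the operator above. A sketch for a Gamma target:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n = 2.5, 1_000_000
x = rng.gamma(alpha, 1.0, size=n)

# Density-method Stein operator: A f(x) = f'(x) + (log p)'(x) f(x),
# with E[A f(X)] = 0 when X ~ p and f p vanishes at the boundary.
score = (alpha - 1) / x - 1.0          # (log p)'(x) for Gamma(alpha, 1)
f  = lambda t: t * np.exp(-t)          # f(x) p(x) -> 0 at both 0 and infinity
fp = lambda t: (1 - t) * np.exp(-t)
print((fp(x) + score * f(x)).mean())   # ~ 0
```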
Multivariate Gaussian Stein method
The work of Elizabeth Meckes (1980–2020) serves as the canonical introduction to the area for now, although she never wrote a textbook. Two foundational papers are Chatterjee and Meckes (2008) and Meckes (2009), and there is a kind of introductory user guide in Meckes (2012). The examples are mostly about random projections, although the method is much more general. Exchangeable pairs are natural in projections, though: you can just switch off your brain and turn the handle to produce results, or, easier yet, a computer algebra system that can handle noncommutative algebra can do it for you.
If the papers are too dense, try the friendly lecture mentioned above, Stein’s Method: The last gadget under the hood.
Stein discrepancy
A probability metric based on something like “how well this distribution satisfies Stein’s lemma”, I think: take a characterizing operator \(\mathcal{A}_P\) for a target \(P\) and define \(d(Q, P)=\sup _{f \in \mathcal{F}}\left|\mathbb{E}_{Q} \mathcal{A}_P f\right|\), which is zero when \(Q=P\) and can be estimated from samples of \(Q\) alone.
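One concrete, computable instance is the kernelized Stein discrepancy (see e.g. Liu, Lee, and Jordan 2016 or Gorham and Mackey 2017), which replaces the sup over test functions with a closed form over an RKHS ball. A minimal sketch for a standard normal target in one dimension, assuming an RBF kernel; the function name `ksd_gauss` and the bandwidth choice are mine.

```python
import numpy as np

def ksd_gauss(x, h=1.0):
    """V-statistic estimate of the squared kernelized Stein discrepancy
    between the sample x and N(0,1), with an RBF kernel of bandwidth h.
    Builds the Langevin-Stein kernel u_p(x,y) from the score s(x) = -x."""
    s = -x                                  # score (log-density gradient) of N(0,1)
    d = x[:, None] - x[None, :]
    k = np.exp(-d ** 2 / (2 * h ** 2))
    u = (s[:, None] * s[None, :] * k        # s(x) s(y) k(x,y)
         + s[:, None] * (d / h ** 2) * k    # s(x) dk/dy, with dk/dy = (d/h^2) k
         - s[None, :] * (d / h ** 2) * k    # s(y) dk/dx, with dk/dx = -(d/h^2) k
         + (1 / h ** 2 - d ** 2 / h ** 4) * k)   # d^2 k / dx dy
    return u.mean()

rng = np.random.default_rng(5)
print(ksd_gauss(rng.standard_normal(500)))        # small: sample matches target
print(ksd_gauss(rng.standard_normal(500) + 1.0))  # larger: shifted sample
```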