One way I can get at the confusing behaviours of high-dimensional distributions is to look at low-dimensional projections of them instead. If I have a (possibly fixed) data matrix and a random low-dimensional projection, what distribution does the projection have?

This idea pertains to many others: matrix factorisations, restricted isometry properties, Riesz bases, randomised regression, compressed sensing. You could also consider these results as arising from/resulting in certain structured random matrices.

## Tutorials

There is a confusing note soup here, sorry. You might find it better to read a coherent overview such as Meckes’ lecture slides, which include a lot of important recent developments, many of which she invented:

- Random Unitary Matrices and Friends
- The Topology of Random Spaces
- Linear Projections of High-Dimensional Data.

Related: Weird slicing problems in convex geometry. For a theoretical background as to how that relates, see Guédon (2014).

## Inner products

Djalil Chafaï introduces the Funk-Hecke formula (also mentioned under isotropic RVs), which gives us a formula for a particularly simple case, that of unit-norm RVs:

… if \(X\) is a random vector of \(\mathbb{R}^{n}\) uniformly distributed on \(\mathbb{S}^{n-1}\) then for all \(y \in \mathbb{S}^{n-1}\), the law of \(X \cdot y\) has density \[ t \in[-1,1] \mapsto \frac{\Gamma\left(\frac{n}{2}\right)}{\sqrt{\pi} \Gamma\left(\frac{n-1}{2}\right)}\left(1-t^{2}\right)^{\frac{n-3}{2}} \] This law does not depend on the choice of \(y \in \mathbb{S}^{n-1}\). It is symmetric in the sense that \(X \cdot y\) and \(-(X \cdot y)\) have the same law. The law of \(|X \cdot y|\) is the image of the law \(\operatorname{Beta}\left(\frac{1}{2}, \frac{n-1}{2}\right)\) by the map \(u \mapsto \sqrt{u}\). The law of \(X \cdot y\) is

- if \(n=2\) : an arcsine law,
- if \(n=3\) : a uniform law (Archimedes’ principle),
- if \(n=4\) : a semicircle law.

whuber asserts that \((X \cdot y+1)/2\sim\operatorname{Beta}((d-1)/2,(d-1)/2),\) and \(\operatorname{Var}(X \cdot y)=1/d.\)
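These claims are easy to sanity-check numerically. A minimal Monte Carlo sketch (my own, using nothing but numpy and scipy, not code from either source):

```python
# Monte Carlo check of the sphere inner-product law.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
d, n_samples = 10, 100_000

# Uniform points on S^{d-1}: normalise standard Gaussian vectors.
X = rng.standard_normal((n_samples, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

y = np.zeros(d)
y[0] = 1.0  # any fixed unit vector; the law does not depend on the choice
t = X @ y

print("Var(X.y) =", t.var(), "vs 1/d =", 1 / d)

# (X.y + 1)/2 should be Beta((d-1)/2, (d-1)/2):
print(stats.kstest((t + 1) / 2, stats.beta((d - 1) / 2, (d - 1) / 2).cdf))
```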

## Random projections are kinda Gaussian

More generally things are not so exact, but they are still reasonably nice: there are lots of tasty limit theorems with regular behaviour.

A classic introductory concept: the *Diaconis-Freedman effect*.
Diaconis and Freedman (1984) show (under some mild conditions, omitted here) that if
\[
\left\{x_{1}, \ldots, x_{n}\right\} \subseteq \mathbb{R}^{d}
\]
is a data set (possibly deterministic, with no assumption on the generating process), \(\theta\) is a uniform random point on the sphere \(\mathbb{S}^{d-1},\) and
\[
\mu_{x}^{\theta}:=\frac{1}{n} \sum_{i=1}^{n} \delta_{\left\langle x_{i}, \theta\right\rangle}
\]
is the empirical measure of the projections of the \(x_{i}\) onto \(\theta\), then as \(n, d \rightarrow \infty,\) the measures \(\mu_{x}^{\theta}\) tend to \(\mathcal{N}\left(0, \sigma^{2}\right)\) weakly in probability. (Roughly, the omitted conditions pin down \(\sigma^{2}\): most of the \(x_{i}\) should have squared norm close to \(\sigma^{2} d,\) and most pairs \(x_{i}, x_{j}\) should be nearly orthogonal.)
This succinct statement is modelled on Elizabeth Meckes'.

A lesson is that even non-Gaussian, non-independent data can become nearly i.i.d. Gaussian under low-dimensional projection, as Dasgupta, Hsu, and Verma (2006) argue in their introduction.
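A quick simulation (my own sketch in plain numpy/scipy, not an example from the papers) makes the effect vivid: the data below are corners of a hypercube, about as non-Gaussian as a dataset gets, yet a random one-dimensional projection of them looks comfortably Gaussian.

```python
# The Diaconis-Freedman effect in simulation: a fixed, thoroughly non-Gaussian
# dataset has a nearly Gaussian empirical measure after random projection.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d = 2_000, 500

# Data: random corners of the hypercube {-1, +1}^d.
x = rng.choice([-1.0, 1.0], size=(n, d))

# theta: a uniform random point on the sphere S^{d-1}.
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)

proj = x @ theta  # the atoms of the empirical measure mu_x^theta

# Every |x_i|^2 = d here, so sigma^2 = 1; compare against N(0, 1).
print(stats.kstest(proj, stats.norm().cdf))
```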

This has been taken to incredible depth in the work of Elizabeth Meckes (1980–2020), whose papers serve as the canonical textbook in the area for now. Two foundational ones are Chatterjee and Meckes (2008) and E. Meckes (2009), and there is a kind of user guide in E. Meckes (2012b) which leverages Stein’s method a whole bunch.

## Random projections are distance preserving

This is what makes random embeddings go. The most famous result is the Johnson-Lindenstrauss lemma: any \(n\) points in \(\mathbb{R}^{d}\) can be linearly mapped into \(\mathbb{R}^{k}\) with \(k=O\left(\varepsilon^{-2} \log n\right)\) while distorting no pairwise distance by more than a factor of \(1 \pm \varepsilon\).

A simple proof is given by Dasgupta and Gupta (2003).
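To see the lemma in action, here is a toy sketch in numpy (mine, not Dasgupta and Gupta’s construction, though a Gaussian projection matrix is one standard choice):

```python
# Johnson-Lindenstrauss in action: a scaled Gaussian random matrix
# approximately preserves all pairwise distances among n points.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n, d, k = 100, 10_000, 1_000  # the lemma needs only k = O(log(n) / eps^2)

x = rng.standard_normal((n, d))               # any point cloud will do
A = rng.standard_normal((k, d)) / np.sqrt(k)  # entries N(0, 1/k)
z = x @ A.T                                   # embed R^d into R^k

ratio = pdist(z) / pdist(x)                   # distortion of each pair
print("pairwise distortions in [%.3f, %.3f]" % (ratio.min(), ratio.max()))
```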

“Locality-Sensitive Hashing (LSH) is an algorithm for solving the approximate or exact Near Neighbor Search in high dimensional spaces.” (Is this even random?)

John Myles White explains why the Johnson-Lindenstrauss lemma is worth knowing

Bob Durrant: The Unreasonable Effectiveness of Random Projections in Computer Science

## Projection statistics

Another key phrase we can look for is probability on the *Stiefel manifold*, which generalises the familiar notion of a random orthonormal matrix.
Elements of a Stiefel manifold generalise orthonormal matrices in that they can map between spaces of different dimension.
Formally, the Stiefel manifold \(V_{k, m}\) is the space of \(k\)-frames in the \(m\)-dimensional real Euclidean space \(\mathbb{R}^{m},\) represented by the set of \(m \times k\) matrices \(X\) such that \(X^{\top} X=I_{k},\) where \(I_{k}\) is the \(k \times k\) identity matrix.
The low-dimensional projections of interest here correspond to \(k\ll m,\) especially \(k=1.\)

Cool results in this domain include Chikuse (2003); E. S. Meckes and Meckes (2013); E. Meckes (2012a); Stam (1982).

General projections results are in Dümbgen and Del Conte-Zerial (2013).

An important trick gives us the distribution of isotropic unit vectors and, more generally, of uniform random frames:

Let \(Z:=\left(Z_{1}, Z_{2}, \ldots, Z_{k}\right)\) be a random matrix in \(\mathbb{R}^{m \times k}\) with independent, standard Gaussian column vectors \(Z_{j} \in \mathbb{R}^{m}.\) Then \[ \Theta:=Z\left(Z^{\top} Z\right)^{-1 / 2} \] has the desired (uniform) distribution on \(V_{k, m}\); in the case \(k=1\) this is simply \(\Theta=Z /\|Z\|_{2}.\) Moreover, \[ \Theta=m^{-1 / 2} Z\left(I+O_{p}\left(m^{-1 / 2}\right)\right) \quad \text { as } m \rightarrow \infty. \]
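Here is a minimal numerical sketch of this construction in plain numpy; reducing \(Z\left(Z^{\top} Z\right)^{-1 / 2}\) to \(U V^{\top}\) via the thin SVD \(Z=U S V^{\top}\) is my own convenience, not from the sources above.

```python
# A Haar-uniform element of the Stiefel manifold V_{k,m}, obtained by
# "whitening" a Gaussian matrix: Theta = Z (Z^T Z)^{-1/2}.
import numpy as np

rng = np.random.default_rng(3)
m, k = 1_000, 3

Z = rng.standard_normal((m, k))

# Via the thin SVD Z = U S V^T we have Z (Z^T Z)^{-1/2} = U V^T,
# which avoids forming a matrix square root explicitly.
U, _, Vt = np.linalg.svd(Z, full_matrices=False)
Theta = U @ Vt

print(np.allclose(Theta.T @ Theta, np.eye(k)))  # True: orthonormal columns
print(np.abs(Theta - Z / np.sqrt(m)).max())     # small: Theta is close to m^{-1/2} Z
```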

Vershynin’s writing on a variety of hard high-dimensional probability results is pretty accessible: Vershynin (2015); Vershynin (2018). These bleed over into concentration results.

I wrote one of my own… TBD.

## Concentration theorems for projections

Many, e.g. Dasgupta, Hsu, and Verma (2006); Dümbgen and Del Conte-Zerial (2013); Gantert, Kim, and Ramanan (2017); Kim, Liao, and Ramanan (2020).

## References

*Journal of Computer and System Sciences*, Special Issue on PODS 2001, 66 (4): 671–87.

*Transactions of the American Mathematical Society* 355 (12): 4723–35.

*Proceedings of the American Mathematical Society* 105 (2): 397.

*Nonparametric Inference on Manifolds: With Applications to Shape Spaces*. Institute of Mathematical Statistics Monographs. Cambridge: Cambridge University Press.

*Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, 245–50. KDD ’01. New York, NY, USA: ACM.

*The Annals of Probability* 31 (1): 195–215.

*Geometric Aspects of Functional Analysis: Israel Seminar 2001-2002*, edited by Vitali D. Milman and Gideon Schechtman, 44–52. Lecture Notes in Mathematics. Berlin, Heidelberg: Springer.

*IEEE Transactions on Information Theory* 52 (12): 5406–25.

*arXiv:math/0701464*, January.

*Statistics on Special Manifolds*. New York, NY: Springer New York.

*Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence*, 143–51. UAI’00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

*Random Structures & Algorithms* 22 (1): 60–65.

*Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence*, 114–21. UAI’06. Arlington, Virginia, USA: AUAI Press.

*The Annals of Statistics* 12 (3): 793–815.

*From Probability to Statistics and Back: High-Dimensional Models and Processes – A Festschrift in Honor of Jon A. Wellner*, January, 91–104.

*Journal of Functional Analysis* 254 (8): 2275–93.

*arXiv:1001.0875 [Math]*, January.

*Advances in Neural Information Processing Systems*, 473–80.

*The Annals of Probability* 45 (6B): 4419–76.

*ESAIM: Proceedings* 44 (January): 47–60.

*The Annals of Statistics* 21 (2): 867–89.

*Theory of Computing* 8 (1): 321–50.

*Concentration, Functional Inequalities and Isoperimetry: International Workshop on Concentration, Functional Inequalities, and Isoperimetry, October 29-November 1, 2009, Florida Atlantic University, Boca Raton, Florida*. American Mathematical Society.

*Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing*, 604–13. STOC ’98. New York, NY, USA: ACM.

*Geometric Aspects of Functional Analysis: Israel Seminar (GAFA) 2017-2019 Volume II*, edited by Bo’az Klartag and Emanuel Milman, 1–41. Lecture Notes in Mathematics. Cham: Springer International Publishing.

*Journal of Information and Optimization Sciences* 17 (1): 177–84.

*arXiv:1912.13447 [Math]*, June.

*Inventiones Mathematicae* 168 (1): 91–131.

*Journal of Functional Analysis* 245 (1): 284–310.

*arXiv:1607.04331 [Cs, q-Bio, Stat]*, July.

*Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, 287–96. KDD ’06. New York, NY, USA: ACM.

*High Dimensional Probability V: The Luminy Volume*, 153–78. Beachwood, Ohio, USA: Institute of Mathematical Statistics.

*Geometric Aspects of Functional Analysis: Israel Seminar 2006–2010*, edited by Bo’az Klartag, Shahar Mendelson, and Vitali D. Milman, 317–26. Lecture Notes in Mathematics. Berlin, Heidelberg: Springer.

*Journal of Theoretical Probability* 25 (2): 333–52.

*Probability Theory and Related Fields* 156 (1–2): 145–64.

*arXiv:math/0606073*, June.

*Self-Normalized Processes: Limit Theory and Statistical Applications*. Springer Science & Business Media.

*2017 IEEE International Symposium on Information Theory (ISIT)*, 3045–49.

*Electronic Communications in Probability* 18: 1–9.

*Geometric Aspects of Functional Analysis*, edited by Vitali D. Milman and Gideon Schechtman, 1910:271–95. Berlin, Heidelberg: Springer Berlin Heidelberg.

*Journal of Applied Probability* 19 (1): 221–28.

*Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory*, January, 583–602.

*arXiv:1011.3027 [Cs, Math]*, November.

*Sampling Theory, a Renaissance: Compressive Sensing and Other Developments*, edited by Götz E. Pfander, 3–66. Applied and Numerical Harmonic Analysis. Cham: Springer International Publishing.

*High-Dimensional Probability: An Introduction with Applications in Data Science*. 1st ed. Cambridge University Press.

*Probability Theory and Related Fields* 107 (3): 313–24.

*High-Dimensional Data Analysis with Low-Dimensional Models: Principles, Computation, and Applications*. Cambridge: Cambridge University Press.
