# Margins of isotropic random unit vectors

This has been bothering me for days, so I'm posting it here in the hope that someone else may not be so vexed.

Consider $$\mathbb{R}^d$$ with the $$L_2$$ norm, i.e. $$d$$-dimensional Euclidean space.

Let’s say you wish to have a random variable $$V$$ whose realisations are isotropically distributed unit vectors (say, to represent isotropic rotations, or to serve as a null distribution against which to test for isotropy).

The simplest way I know of generating such vectors is to take

$V \sim \mathcal{N}(\mathbf{0}_d, \mathbf{I}_d)$

That is, $$V$$ is distributed as a multivariate Gaussian with independent components, each of standard deviation 1; so if the $$v_i$$ are i.i.d. with $$v_i \sim \mathcal{N}(0,1)$$, then

$V = \left[\begin{array}{c} v_1 \\ v_2 \\ \vdots \\ v_d \end{array} \right]$

By the rotational invariance of multivariate Gaussian distributions we know that this must be isotropic.

Now, to get unit vectors we can simply normalise:

$U \equiv \frac{V}{\left\| V \right\|}$
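In R this recipe is a couple of lines. A minimal sketch (the function name `runit` is mine, not anything standard):

```r
# Draw one isotropic unit vector in R^d:
# normalise a standard multivariate Gaussian draw.
runit <- function(d) {
  v <- rnorm(d)        # v_i ~ N(0, 1), independent
  v / sqrt(sum(v^2))   # U = V / ||V||
}

u <- runit(5)
sum(u^2)   # = 1, up to floating-point error
```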

This much is 30 seconds of web searching away from any of us. (And if you use the “hit-and-run” Monte Carlo sampler, this is your daily bread.)

The question is: what is the distribution of the axial component $$u_i$$ of $$U$$ along axis $$i$$, in the non-trivial case $$d>1$$?

I spent a lot of time doing tedious integrals before I realised there was an easy, calculus-free solution.

Let us consider, w.l.o.g., $$u_1$$. By construction the p.d.f. of $$u_1$$ is even (i.e. it has reflective symmetry about zero), so it will suffice to find the positive half of the distribution. Thus it will also be enough to find the distribution of $$u_1^2$$.

Define

$V' \equiv \left[\begin{array}{c} v_2 \\ v_3 \\ \vdots \\ v_d \end{array} \right]$

i.e. $$V'$$ is $$V$$ with the first component of the vector excised. We have

$\begin{array}{cccc} & U & = & \frac{V}{\left\| V \right\|} \\ \Rightarrow & U^2 & = & \frac{V^2}{\left\| V \right\|^2} \\ \Rightarrow & u_1^2 & = & \frac{v_1^2}{\left\| V \right\|^2} \\ \Rightarrow & u_1^2 & = & \frac{v_1^2}{\left\| V' \right\|^2 + v_1^2}\\ \Rightarrow & \frac{1}{u_1^2} -1 & = & \frac{\left\| V' \right\|^2}{v_1^2}\\ \Rightarrow & \frac{\frac{1}{u_1^2} -1}{d-1} & = & \frac{\left\| V' \right\|^2 / (d-1)}{v_1^2} \end{array}$

Now, the R.H.S. has an $$\mathcal{F}(d-1,1)$$ distribution: $$\left\| V' \right\|^2 \sim \chi^2_{d-1}$$ and $$v_1^2 \sim \chi^2_1$$ are independent, and the ratio of independent $$\chi^2$$ variables, each scaled by its degrees of freedom, is exactly Snedecor’s F, so the work there has already been done for us. Call that R.H.S. $$G$$, where $$G \sim \mathcal{F}(d-1,1)$$. Rearranging,

$u_1^2 = \frac{1}{G(d-1)+1}$
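A quick simulation (my own sanity check, not part of the derivation): $$u_1^2$$ computed directly from normalised Gaussian vectors should have the same distribution as $$1/(G(d-1)+1)$$ with $$G$$ drawn from `rf`:

```r
set.seed(1)
d <- 5
n <- 1e5

# u_1^2 directly, from normalised Gaussian vectors
v <- matrix(rnorm(n * d), ncol = d)
u1.sq <- v[, 1]^2 / rowSums(v^2)

# the same quantity via G ~ F(d-1, 1)
g <- rf(n, d - 1, 1)
u1.sq.f <- 1 / (g * (d - 1) + 1)

# the empirical quantiles should agree closely
round(quantile(u1.sq,   c(.25, .5, .75)), 3)
round(quantile(u1.sq.f, c(.25, .5, .75)), 3)
```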

Now we can construct the p.d.f. of $$u_1$$. In fact, for my purposes I need the quantile/inverse-c.d.f. function, so let’s do that. Write $$Q(x)$$ for the quantile function of the $$\mathcal{F}(d-1,1)$$ distribution, and define $$H(x) \equiv \left((d-1)\,Q(x)+1\right)^{-1/2}$$, which gives us the positive half; write $$F(x)$$ for the quantile function of $$u_1$$ itself (for ‘half’ and ‘full’).

Then we splice two symmetric copies of the function about the origin, $$F(x) = \mathrm{sgn}(2x-1)\, H(1-|2x-1|)$$, and we’re done.

Here’s a plot of the behaviour of our distribution:

Notice that axial components tend to be long in $$\mathbb{R}^2$$, short in more than 3 dimensions, and exactly uniformly distributed in 3 dimensions. Intuitively, as the number of dimensions increases, the contribution of any one axis to the total length tends to be smaller on average, simply because there are more axes to share it among.
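The $$d=3$$ case is the classic Archimedes hat-box result: the axial component of a uniform point on the sphere is itself uniform on $$[-1,1]$$. A quick check by simulation (illustrative, not part of the derivation above):

```r
set.seed(1)
# 10^5 isotropic unit vectors in R^3
v <- matrix(rnorm(3e5), ncol = 3)
u1 <- v[, 1] / sqrt(rowSums(v^2))

# quantiles of u1 should match those of Uniform(-1, 1), i.e. 2p - 1
p <- seq(0.1, 0.9, by = 0.1)
round(quantile(u1, p), 2)
```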

And here’s the R code to generate the graph, most of which is plotting logic.

```r
half.quantile <- function(x, d = 3) {
  g <- qf(x, d - 1, 1)
  return(1 / sqrt(g * (d - 1) + 1))
}

full.quantile <- function(x, d = 3) {
  x.scaled <- 2 * x - 1
  res <- sign(x.scaled) * half.quantile(1 - abs(x.scaled), d)
  return(res)
}

pts <- seq(0, 1, 1/512)
dims <- seq(2, 10)
ndims <- length(dims)
vals <- data.frame(outer(pts, dims, full.quantile))
dim.labels <- paste(dims)  # legend labels; 'dimnames' would shadow a base function
xrange <- c(0, 1)
yrange <- c(-1, 1)
library(RColorBrewer)
colors <- brewer.pal(ndims, "Spectral")

# set up an empty plot, then draw one quantile curve per dimension
plot(xrange, yrange, type = "n", xlab = "x", ylab = "quantile")
matlines(pts, vals, col = colors, lty = 1)
legend("topleft", legend = dim.labels, col = colors,
       lty = 1, title = "Dimension")
```