Fun with rotational symmetries



There are some related tricks that I use for functions with rotational symmetry, and for functions on domains with rotational symmetry. Here is where I write them down so that I remember them.

Throughout this I will use spheres and balls a lot, which I will need to make precise. Specifically, the \(n\)-ball is \(B^{n}(r):=\{x:\|x\|\leq r\}\) (a solid ball of radius \(r\)). Its surface is the \((n-1)\)-dimensional sphere \(S^{n-1}(r):=\{x:\|x\|=r\}\) (a thin shell of radius \(r\)). Usually I set \(r=1\) and suppress it.

There are a lot of ways to show that the \((n-1)\)-dimensional surface area is \(|S^{n-1}(1)|=\frac{2 \pi^{n / 2} }{ \Gamma(n / 2)}\), and the \(n\)-dimensional volume is \(|B^{n}(1)|=\frac{2 \pi^{n / 2} }{ n \Gamma(n / 2)}\). One way is to use the polynomial integration rules below.
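A quick numerical sanity check of these formulas, as a sketch (Python with scipy assumed; the helper names are mine):

```python
# Surface area of S^{n-1} and volume of B^n from the Gamma-function formulas.
import numpy as np
from scipy.special import gamma

def sphere_area(n):
    """(n-1)-dimensional surface area of the unit sphere S^{n-1} in R^n."""
    return 2 * np.pi ** (n / 2) / gamma(n / 2)

def ball_volume(n):
    """n-dimensional volume of the unit ball B^n: the area formula over n."""
    return sphere_area(n) / n

# familiar low-dimensional values: circumference 2*pi, disc area pi,
# sphere area 4*pi, ball volume 4*pi/3
print(sphere_area(2), ball_volume(2), sphere_area(3), ball_volume(3))
```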

Radial functions

A function \(g: \mathbb{R}^{n}\setminus\{0\} \rightarrow \mathbb{R}\) is radial if there is a function \(k : \mathbb{R}^+ \rightarrow \mathbb{R}\) such that \[g(x)=k(\|x\|),\quad x\in\mathbb{R}^n\setminus\{0\}.\] I do not really like this terminology, ‘radial’, but I will concede that it is shorter than ‘rotationally symmetric’.

Put another way, consider the \(n\)-dimensional polar coordinates representation of a vector, which is unique for non-null vectors: \[ x=r x^{\prime}, \quad \text { where } \quad r=\|x\| \quad \text { and } \quad x'=\frac{x}{\|x\|}. \] (Equivalently, \(x'\in S^{n-1}(1)\).) Radial functions are those which do not depend upon the “angle” vector \(x'\).

Put another way again, let \(R\) be an arbitrary rotation matrix. If \(g\) is radial, \(g(Rx)=g(x)\).

These functions are important in, e.g. spherical distributions.

In dot-product kernels

Radial functions are connected to dot-product kernels, in that dot-product kernels have rotational symmetries in their arguments, i.e. \(k(x,y)=k(Rx,Ry).\) Working out whether a function with such symmetries is a dot-product kernel, i.e. that it is [positive definite](./kernel_methods.html), is not trivial. Smola, Óvári, and Williamson (2000) find rules for covariance kernels constrained to a sphere based on a Legendre basis decomposition. Alternatively you can use the constructive approach of the Schaback–Wu transform algebra, which promises to preserve positive-definiteness under certain operations. Both these approaches get quite onerous except in special cases.

🏗️

Polynomial integrals on radially symmetric domains

I may have found this via John D. Cook. Even if not, go read his blog.

Suppose we have a polynomial function that we wish to integrate over a ball or a sphere. Folland (2001) is an elementary introduction to integrating functions over a sphere. He explains: Let \(\sigma\) denote the \((n-1)\) -dimensional surface measure on \(S^{n-1}.\) Our object is to compute \(\int_{S^{n-1}} p(x) d \sigma\) where \(p\) is a polynomial in the elements of \(x=[x_1\, x_2\, \dots \, x_n]^\top\). For this it suffices to consider the case where \(p\) is a monomial, \[ p(x)=x_{1}^{a_{1}} x_{2}^{a_{2}} \cdots x_{n}^{a_{n}} \quad\left(a_{1}, \ldots, a_{n} \in\{0,1,2, \ldots\}\right) \]

Let \(b_{j}=\frac{1}{2}\left(a_{j}+1\right).\) Then, he shows, \[ \int_{S^{n-1}} p \,d \sigma=\left\{\begin{array}{ll} 0 & \text { if some } a_{j} \text { is odd, } \\ \frac{2 \Gamma\left(b_{1}\right) \Gamma\left(b_{2}\right) \cdots \Gamma\left(b_{n}\right)}{\Gamma\left(b_{1}+b_{2}+\cdots+b_{n}\right)} & \text { if all } a_{j} \text { are even. } \end{array}\right. \] Moreover, letting \(\mu\) denote the standard Lebesgue measure, \[ \int_{B^{n}} p \,d \mu = \frac{\int_{S^{n-1}} p \,d \sigma}{n+ \sum_j a_j}. \]
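Folland's rule is easy to implement; here is a sketch (function names are mine):

```python
# Folland's monomial rule over S^{n-1}, plus the ball version.
import numpy as np
from scipy.special import gamma

def sphere_monomial_integral(a):
    """Integral of x1^a1 * ... * xn^an over the unit sphere S^{n-1}."""
    a = np.asarray(a)
    if np.any(a % 2 == 1):
        return 0.0
    b = (a + 1) / 2
    return 2 * np.prod(gamma(b)) / gamma(b.sum())

def ball_monomial_integral(a):
    """The same monomial integrated over the solid unit ball B^n."""
    return sphere_monomial_integral(a) / (len(a) + sum(a))

# p = 1 recovers |S^2| = 4*pi and |B^3| = 4*pi/3;
# by symmetry, the integral of x1^2 over S^2 is |S^2|/3 = 4*pi/3.
print(sphere_monomial_integral([0, 0, 0]), ball_monomial_integral([0, 0, 0]))
print(sphere_monomial_integral([2, 0, 0]))
```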

Uses: This is a good mnemonic for the volume of a ball or sphere, which we can find by setting \(p\equiv1.\) In a similar vein, Baker (1997) uses polynomial approximation to prove some non-trivial theorems about integrals over spheres.

What if we want to integrate a radial function \(F\) over the sphere? Then it will be given by some \[ F(x)=\sum_{j=0}^{\infty} c_j \|x\|_2^{j}. \] It is not immediately obvious how to approximate this with a polynomial, since there is a hidden square root in that \(\|\cdot\|_2\), and indeed it could be rough and possess no decent polynomial approximation. Things look more hopeful if it can be expressed in terms of even powers of that norm, \[ F(x)=\sum_{j=0,\text{even}}^{\infty} c_j \|x\|_2^{j}, \] although things are going to get messy with the cross terms; there will be some multinomial coefficient nonsense. Alternatively we could give up on radial functions per se and consider the class of even functions invariant to permutations of the axes, \[ F(x)=\sum_{j=0,\text{even}}^{\infty} c_j \prod_{k=1}^n x_k^{j}. \] I would be surprised if these had any sensible universal approximation qualities, but they can probably be used to bound some non-trivial radial functions or something. What can we say about those?

Applying the monomial rule termwise (each term is the monomial \(x_1^j x_2^j\cdots x_n^j\), so every \(b_k=\frac{j+1}{2}\) and \(\sum_k a_k=nj\)): \[ \begin{aligned} \int_{S^{n-1}} F \,d \sigma &= \sum_{j=0,\text{even}}^{\infty} c_j \frac{ 2 \Gamma\left(\frac{j+1}{2}\right)^n }{ \Gamma\left(n\frac{j+1}{2}\right) }\\ \int_{B^{n}} F \,d \mu &= \sum_{j=0,\text{even}}^{\infty} c_j \frac{ 2 \Gamma\left(\frac{j+1}{2}\right)^n }{ n(1+j)\Gamma\left(n\frac{j+1}{2}\right) }\\ \int_{S^{n-1}} F \,d \sigma -\int_{B^{n}} F \,d \mu &= \sum_{j=0,\text{even}}^{\infty} 2c_j \frac{\Gamma\left(\frac{j+1}{2}\right)^n}{\Gamma\left(n\frac{j+1}{2}\right)} \left(1-\frac{1}{n(1+j)}\right) \end{aligned} \]

This implies that the integrals over ball and sphere both go to zero as \(n\) increases; which, now that I think about it, I can persuade myself makes sense, since we increase the order of each polynomial term by \(j\geq 2\) (for the non-constant terms) each time we increase \(n\); eventually this is an extremely high-order polynomial that is very close to 0 on the interior of the unit ball. So this class of polynomial functions is not, I think, that useful in high dimensions.

Integrating radial functions on rotationally symmetric domains

Suppose we have an arbitrary radial function \(f\) on a rotationally symmetric domain; what can we say about that? Firstly, if the domain is \(S^{n-1}(r)\), then it is easy. Let \(e\) be an arbitrary unit vector. Then on the unit sphere, \[\begin{aligned} \int_{S^{n-1}} f(x) \,d \sigma &=f(e)|S^{n-1}|\\ &=f(e)\frac{2 \pi^{n / 2} }{ \Gamma(n / 2)} \end{aligned}\] since \(f\) is by construction constant on the surface of the sphere.

On the ball it is slightly more complicated. Normalizing to an average, \[\frac{1}{|B^{n}|}\int_{B^{n}} f(x) \,d\mu=\int_0^1 f(u^{1/n}e)\, du.\]

The rationale for this latter one is given below, in the section on generating random points: if \(X\) is uniform on the ball then \(\|X\|\) is distributed as \(U^{1/n}\) for uniform \(U\). Anyway, note that this is essentially a univariate integral.
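As a sanity check on that univariate reduction, here is a sketch comparing \(\int_0^1 f(u^{1/n}e)\,du\) against a brute-force Monte Carlo average over the ball (the radial profile, dimension, and sample size are arbitrary choices of mine):

```python
# Check: the average of a radial f over B^n equals int_0^1 f(u^{1/n} e) du,
# since the radius of a uniform point on the ball is distributed as U^{1/n}.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
n = 4
k = lambda r: np.exp(-3 * r)  # radial profile: f(x) = k(||x||)

# univariate version of the ball average
univariate, _ = quad(lambda u: k(u ** (1.0 / n)), 0, 1)

# Monte Carlo: uniform points in the ball by rejection from the cube
pts = rng.uniform(-1, 1, size=(400_000, n))
radii = np.linalg.norm(pts, axis=1)
mc = k(radii[radii <= 1]).mean()

print(univariate, mc)  # should agree to a couple of decimal places
```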

What can we say about these integrals? For one, comparing the normalized averages, \[\begin{aligned} \frac{1}{|B^{n}|}\int_{B^{n}} f \,d\mu - \frac{1}{|S^{n-1}|}\int_{S^{n-1}} f \,d \sigma &=\int_0^1 f(u^{1/n}e)\, du - f(e)\\ &=\text{TBC} \end{aligned}\]

I might come back to this and say something about the rate of growth of \(f(ue)\) near \(u=1,\) but I think I can leave that for a moment. Or, actually, how about I consider functions which, if they bound \(u\mapsto f(ue),\) give a bound on the divergence of the integral of \(f\) over the ball from \(f\) over the sphere?

Here is an easy one. Suppose \(0\leq f(re)\leq C r^{-1}\) for some constant \(C,\) so that \(f(u^{1/n}e)\leq C u^{-1/n}.\) Then \[\begin{aligned} \frac{1}{|B^{n}|}\int_{B^{n}} f(x)\, d\mu &=\int_0^1 f(u^{1/n}e)\, du\\ &\leq C\int_0^1 u^{-1/n}\, du\\ &= \frac{Cn}{n-1}, \end{aligned}\] which is at most \(2C\) for \(n\geq 2.\)

Let us look at some bounding curves for various values of \(n\).

Unintuitively (for me), the integrand needs to be more tightly controlled to keep that integral bounded in higher dimensions: the bounding curve \(u^{-1/n}\) approaches the constant function \(1\). There is always a region around \(u=0\) where the integrand can grow arbitrarily large, but this region grows smaller as \(n\) increases.

Plot[{ u^(-1/2), u^(-1/3), u^(-1/4), u^(-1/5)}, {u, 0, 1},
 PlotTheme -> "Web", 
 PlotLabels -> 
  Placed[Automatic, {{0.6, Above}, 0.2, {0.2, Below}, {0.1, Below}}]]

Generating random points on balls and spheres

How to generate uniformly random points on n-spheres and in n-balls lists a few methods.

Of use to me is the Barthe et al. (2005) method (which can be generalised to balls based on arbitrary \(L_p\) distances, not just \(L_2\) as here): \[ \frac{\left(X_{1}, \ldots, X_{n}\right)}{\sqrt{Y+\sum_{i=1}^{n} X_{i}^{2}}} \] Here each \(X_i\) is a standard Gaussian and \(Y\) is an independent exponential with rate \(1/2\), i.e. mean \(2\), equivalently \(Y\sim\chi^2_2\).
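Here is a sketch of that sampler. One wrinkle worth recording: with *standard* Gaussians \(X_i\), the \(\chi^2\) bookkeeping needs \(Y\sim\chi^2_2\) (exponential with mean \(2\)), so that \(\|X\|^2/(\|X\|^2+Y)\sim\operatorname{Beta}(n/2,1)\) and the radius to the \(n\)-th power is uniform.

```python
# Barthe et al.-style sampler for uniform points in the unit ball B^n.
import numpy as np

rng = np.random.default_rng(1)

def uniform_ball(num, n):
    X = rng.standard_normal((num, n))
    Y = rng.exponential(scale=2.0, size=(num, 1))  # mean 2, i.e. chi^2_2
    return X / np.sqrt(Y + (X ** 2).sum(axis=1, keepdims=True))

n = 5
r = np.linalg.norm(uniform_ball(200_000, n), axis=1)
# uniformity check: r^n should itself be Uniform(0,1), so its mean is ~1/2
print(r.max() <= 1.0, (r ** n).mean())
```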

From that same page we learn that for \(U\sim \operatorname{Unif}([0,1]),\) \(U^{1/n}\) has the same distribution as \(\|X\|_2\) if \(X \sim \operatorname{Unif}(B^{n}),\) which is an alternative statement of the ball-average formula above.

Directional statistics

Apparently a whole field? See Pewsey and García-Portugués (2020).

Random projections

Closely related. See low-dimensional projections.

Transforms

How to work with radial functions.

Hankel transforms

A classic transform for dealing with general radial functions: \[(\mathcal{H}_{\nu }k)(s):=\int _{0}^{\infty }k(r)J_{\nu }(sr)\,r\,\mathrm{d} r.\] Nearly simple. Easy in special cases. Otherwise horrible. TBC.
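One of the easy special cases, as a numerical sketch (brute-force quadrature, slow but transparent): the Gaussian \(e^{-r^2/2}\) is self-reciprocal under the order-\(0\) Hankel transform.

```python
# Order-nu Hankel transform by direct quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def hankel(k, s, nu=0):
    """(H_nu k)(s) = int_0^inf k(r) J_nu(s r) r dr."""
    val, _ = quad(lambda r: k(r) * jv(nu, s * r) * r, 0, np.inf)
    return val

gaussian = lambda r: np.exp(-r * r / 2)
for s in (0.5, 1.0, 2.0):
    print(hankel(gaussian, s), np.exp(-s * s / 2))  # should agree
```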

Integration algebra

So here is a weird rabbit hole I went down. It concerns a cute algebra over radial-function integrals that turns out to be not that useful for the kinds of problems I usually face, but which comes out very nicely if you, e.g., have particular function structures, or want to know that you have preserved positive-definiteness in your functions under some operations.

Here I am going to try to understand Schaback and Wu (1996), who handle the multivariate Fourier transforms and convolutions of radial functions through univariate integrals, which are a kind of warped Hankel transform. This is a good trick if it works, because this special case is relevant to, e.g., isotropic stationary kernels. They tweak the definition of radial functions: specifically, they call a function \(g: \mathbb{R}^{d}\setminus\{0\} \rightarrow \mathbb{R}\) radial if there is a function \(f: \mathbb{R}^+ \rightarrow \mathbb{R}\) such that \[g(x)=f(\|x\|_2^2/2),\quad x\in\mathbb{R}^d\setminus\{0\}.\] This relates to the classic version by \(k(\sqrt{2s})=f(s).\)

Schaback and Wu (1996) is one of those articles where the notation is occasionally ambiguous (it would have been useful to mark which variables are vectors and which scalars) and definitions are overloaded. Also they recycle function names: watch out for \(f,\) \(g\) and \(I\) doing double duty. They use the following convention for a Fourier transform: \[\mathcal{F}_{d}g(\omega) := \hat{g}(\omega):=(2 \pi)^{-d / 2} \int_{\mathbb{R}^{d}} g(x) \mathrm{e}^{-\mathrm{i} \omega^{\top} x} \mathrm{~d} x\] and \[\mathcal{F}^{-1}_{d}g(x):=\check{g}(x):=(2 \pi)^{-d / 2} \int_{\mathbb{R}^{d}} g(\omega) \mathrm{e}^{+\mathrm{i} \omega^{\top} x} \mathrm{~d} \omega\] for \(g \in L_{1}\left(\mathbb{R}^{d}\right).\)

Now if \(g(x)=f\left(\frac{1}{2}\|x\|^{2}\right)\) is a radial function, then the \(d\)-variate Fourier transform is \[\begin{aligned} \hat{g}(\omega) &=\|\omega\|_{2}^{-(d-2)/2} \int_{0}^{\infty} f\left(\frac{1}{2} s^{2}\right) s^{d / 2} J_{(d-2)/2}\left(s \cdot\|\omega\|_{2}\right) \mathrm{d} s \\ &=\int_{0}^{\infty} f\left(\frac{1}{2} s^{2}\right)\left(\frac{1}{2} s^{2}\right)^{(d-2)/ 2}\left(\frac{1}{2} s \cdot\|\omega\|_{2}\right)^{-(d-2) / 2} J_{(d-2) / 2}\left(s \cdot\|\omega\|_{2}\right) s \mathrm{~d} s \\ &=\int_{0}^{\infty} f\left(\frac{1}{2} s^{2}\right)\left(\frac{1}{2} s^{2}\right)^{(d-2) / 2} H_{(d-2)/ 2}\left(\frac{1}{2} s^{2} \cdot \frac{1}{2}\|\omega\|_{2}^{2}\right) s \mathrm{~d} s \end{aligned}\] with the functions \(J_{\nu}\) and \(H_{\nu}\) defined by \[\left(\frac{1}{2} z\right)^{-\nu} J_{\nu}(z)=H_{\nu}\left(\frac{1}{4} z^{2}\right)=\sum_{k=0}^{\infty} \frac{\left(-z^{2} / 4\right)^{k}}{k ! \Gamma(k+\nu+1)}=\frac{{}_{0}F_{1}\left(\nu+1 ;-z^{2} / 4\right)}{\Gamma(\nu+1)}\] for \(\nu>-1\). (The two-argument form \({}_{0}F_{1}(\nu+1;\cdot)\) is the confluent hypergeometric limit function.) If we substitute \(t=\frac{1}{2} s^{2},\) we find \[\begin{aligned} \hat{g}(\omega)&=\int_{0}^{\infty} f(t) t^{(d-2) / 2} H_{(d-2)/2}\left(t \cdot \frac{1}{2}\|\omega\|^{2}\right) \mathrm{d} t \\ &=:\left(F_{\frac{d-2}{2}} f\right)\left(\|\omega\|^{2} / 2\right) \end{aligned}\] with the general operator \[\begin{aligned} \left(F_{\nu} f\right)(r) &:=\int_{0}^{\infty} f(t) t^{\nu} H_{\nu}(t r) \mathrm{d} t. \end{aligned}\]

\(F_{\frac{d-2}{2}}\) is an operator giving the 1-dimensional representation of the \(d\)-dimensional radial Fourier transform of some radial function \(g(x)=f(\|x\|_2^2/2)\) in terms of the radial parameterization \(f\). Note that this parameterization in terms of squared radius is useful in making the mathematics come out nicely, but it is no longer very much like a Fourier transform. Integrating or differentiating with respect to \(r^2\) (which we can do easily) requires some chain-rule work to interpret in the original space, and we no longer have nice things like the Wiener–Khinchin or Bochner theorems with respect to this Fourier-like transform. However, if we can use its various nice properties, we can possibly return to the actual Fourier transform and extract the information we want.

\(J_{\nu}\) is the Bessel function of the first kind. What do we call the following? \[\begin{aligned} H_{\nu}:s &\mapsto \sum_{k=0}^{\infty} \frac{\left(-s\right)^{k}}{k ! \Gamma(k+\nu+1)}\\ &=\left(\frac{1}{\sqrt{s}}\right)^{\nu}J_{\nu}(2\sqrt{s}).\end{aligned}\] I do not know, but it is essential to this theory, since only things which integrate nicely against \(H_{\nu}\) are tractable here. We have integrals like this: for \(\nu>\mu>-1\) and all \(r, s>0\) we have \[\left(F_{\mu} H_{\nu}(s)\right)(r)=\frac{s^{-\nu}(s-r)_{+}^{\nu-\mu-1}}{\Gamma(\nu-\mu)}.\] Now, that does not quite induce a (warped) Hankel transform, because of the \(\left(\frac{1}{\sqrt{s}}\right)^{\nu}\) term, but I don’t think that changes the orthogonality of the basis functions, so possibly we can still use a Hankel transform to calculate an approximant to \(f(\sqrt{2s})\) and then transform it.

So, in \(d\) dimensions, radial functions can be built from \(H_{(d-2)/2}(s)\). Upon inspection, not many familiar things can be made out of these \(H_{\nu}.\) \(f(r)=\mathbb{1}\{S\}(r)\) is one; \(f(r)=\exp(-r)\) is another. The others are all odd and contrived, or too long to even write down, as far as I can see. Possibly approximations in terms of \(H\) functions would be useful? Up to a warp of the argument, that looks nearly like a Hankel transform.

Compare with the Hankel transform: \[\begin{aligned} (\mathcal{H}_{\nu }f)(r) &=\int _{0}^{\infty }f(t)\,t\,J_{\nu }(tr)\,\mathrm{d} t.\end{aligned}\]

With this convention, and the symmetry of radial functions, we get \[F^{-1}_{\nu}=F_{\nu}.\] That is, the \(F\) pseudo-Fourier transform is its own inverse. This seems weird because of the \(r^2\) warping, but the Fourier transform is already close to its own inverse for radial functions, and if you squint you can imagine this following from the analogous property of the kinda-similar Hankel transforms.

Let \(\nu>\mu>-1.\) Then for all functions \(f: \mathbb{R}_{>0} \rightarrow \mathbb{R}\) with \[f(t) \cdot t^{\nu-\mu-1 / 2} \in L_{1}\left(\mathbb{R}^{+}\right)\] it follows that \[F_{\mu} \circ F_{\nu}=I_{\nu-\mu}\] where the integral operator \(I_{\alpha}\) is given by \[\left(I_{\alpha} f\right)(r)=\int_{0}^{\infty} f(s) \frac{(s-r)_{+}^{\alpha-1}}{\Gamma(\alpha)} \mathrm{d} s, \quad r>0, \quad \alpha>0.\] Here we have used the truncated power function \[x_{+}^{n}={\begin{cases}x^{n}&:\ x>0\\0&:\ x\leq 0.\end{cases}}\] It can be extended to \(\alpha\leq 0\) with some legwork.

But what is this operator \(I_{\alpha}\)? Some special cases/extended definitions are of interest: \[\begin{aligned} \left(I_{0} f\right)(r) &:=f(r), & & f \in C\left(\mathbb{R}_{>0}\right) \\ \left(I_{-1} f\right)(r) &:=-f^{\prime}(r), & & f \in C^{1}\left(\mathbb{R}_{>0}\right)\\ I_{-n} &:=(I_{-1})^{\circ n}, & & n>0\\ I_{-\alpha} &:=I_{n-\alpha} \circ I_{-n} & & 0<\alpha \leq n=\lceil\alpha\rceil\end{aligned}\] In general \(I_{\alpha}\) is, up to a sign change, \(\alpha\)-fold integration. Note that \(\alpha\) is not in fact restricted to integers, and we have for free all fractional derivatives and integrals encoded in its values. Neat.
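Numerically these operators are friendly. Here is a sketch of \(I_\alpha\) by quadrature (the substitution \(s=r+v^2\) tames the endpoint singularity when \(0<\alpha<1\)), checking the semigroup property \(I_{1/2}\circ I_{1/2}=I_{1}\) on a test function of my choosing:

```python
# I_alpha by quadrature; check that I_{1/2} applied twice equals I_1.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def I(alpha, f, r):
    """(I_alpha f)(r) = int_r^inf f(s) (s-r)^{alpha-1}/Gamma(alpha) ds,
    computed via s = r + v^2 so the kernel is nonsingular at s = r."""
    g = lambda v: f(r + v * v) * (v * v) ** (alpha - 1) * 2 * v / gamma(alpha)
    return quad(g, 0, np.inf)[0]

f = lambda s: np.exp(-2 * s)
r = 0.4
once_half = lambda t: I(0.5, f, t)   # I_{1/2} f
twice_half = I(0.5, once_half, r)    # I_{1/2} I_{1/2} f, by nested quadrature
# analytically, (I_1 f)(r) = int_r^inf e^{-2s} ds = e^{-2r}/2
print(twice_half, I(1.0, f, r), np.exp(-2 * r) / 2)
```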

If something can be made to come out nicely with respect to this integral operator \(I_{\alpha},\) especially \(\alpha\in\{-1,1/2,1\}\) then all our calculations come out easy.

We have a sweet algebra over these \(I_{\alpha}\) and \(F_{\nu}\) and their interactions: \[I_{\alpha} \circ I_{\beta} = I_{\alpha+\beta},\qquad F_{\nu} \circ I_{\alpha} = F_{\nu+\alpha},\qquad I_{\alpha} \circ F_{\nu} = F_{\nu-\alpha}.\] Rearranging, \[F_{\mu} = I_{\nu-\mu} \circ F_{\nu} = F_{\nu} \circ I_{\mu-\nu}.\]

We have fixed points \[I_{\alpha}(\mathrm{e}^{-r}) = \mathrm{e}^{-r}\] and \[F_{\nu}(\mathrm{e}^{-r}) = \mathrm{e}^{-r}.\]
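The \(F_{\nu}\) fixed point can be checked numerically, as a sketch (direct quadrature of the \(H_\nu\) kernel; \(I_\alpha(\mathrm{e}^{-r})=\mathrm{e}^{-r}\) is immediate from the Gamma integral):

```python
# Check numerically that e^{-t} is a fixed point of F_nu.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def H(nu, s):
    """H_nu(s) = s^{-nu/2} J_nu(2 sqrt(s))."""
    return s ** (-nu / 2) * jv(nu, 2 * np.sqrt(s))

def F(nu, f, r):
    """(F_nu f)(r) = int_0^inf f(t) t^nu H_nu(t r) dt."""
    return quad(lambda t: f(t) * t ** nu * H(nu, t * r), 0, np.inf, limit=200)[0]

f = lambda t: np.exp(-t)
for nu in (0.0, 0.5, 1.0):
    print(F(nu, f, 0.8), np.exp(-0.8))  # should agree
```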

We can use these formulae to calculate multidimensional radial Fourier transforms. With \(\mathcal{F}_{d}:=F_{\frac{d-2}{2}},\) the \(d\) variate Fourier transform written as a univariate operator on radial functions, we find \[\mathcal{F}_{n}=I_{(m-n) / 2} \mathcal{F}_{m}=\mathcal{F}_{m} I_{(n-m) / 2}\] for all space dimensions \(m, n \geq 1 .\) Recursion through dimensions can be done in steps of two via \[\mathcal{F}_{m+2}=I_{-1} \mathcal{F}_{m}=\mathcal{F}_{m} I_{1}\] and in steps of one by \[\mathcal{F}_{m+1}=I_{-1 / 2} \mathcal{F}_{m}=\mathcal{F}_{m} I_{1 / 2}\]

We have some tools for convolving multivariate radial functions by considering their univariate representations. Consider the convolution operator on radial functions \[C_{\nu}: \mathcal{S} \times \mathcal{S} \rightarrow \mathcal{S}\] defined by \[C_{\nu}(f, g)=F_{\nu}\left(\left(F_{\nu} f\right) \cdot\left(F_{\nu} g\right)\right).\] For \(\nu=\frac{d-2}{2}\) it coincides with the operator that takes \(d\)-variate convolutions of radial functions and rewrites the result in radial form. For \(\nu, \mu \in \mathbb{R}\) we have \[C_{\nu}(f, g)=I_{\mu-\nu} C_{\mu}\left(I_{\nu-\mu} f, I_{\nu-\mu} g\right)\] for all \(f, g \in \mathcal{S}.\)

For dimensions \(d \geq 1\) we have \[C_{\frac{d-2}{2}}(f, g)=I_{\frac{1-d}{2}} C_{-\frac{1}{2}}\left(I_{\frac{d-1}{2}} f, I_{\frac{d-1}{2}} g\right).\] If \(d\) is odd, the \(d\) variate convolution of radial functions becomes a derivative of a univariate convolution of integrals of \(f\) and \(g\). For instance, \[\begin{aligned} f *_{3} g &=I_{-1}\left(\left(I_{1} f\right) *_{1}\left(I_{1} g\right)\right) \\ &=-\frac{d}{d r}\left(\left(\int_{r}^{\infty} f\right) *_{1}\left(\int_{r}^{\infty} g\right)\right). \end{aligned}\]

For \(d\) even, to reduce a bivariate convolution to a univariate convolution, one needs the operations \[\left(I_{1 / 2} f\right)(r)=\int_{r}^{\infty} f(s) \frac{(s-r)^{-1 / 2}}{\Gamma(1 / 2)} \mathrm{d} s\] and the semi-derivative \[\left(I_{-1 / 2} f\right)(r)=\left(I_{1 / 2} I_{-1} f\right)(r)=-\int_{r}^{\infty} f^{\prime}(s) \frac{(s-r)^{-1 / 2}}{\Gamma(1 / 2)} \mathrm{d} s\]

Note that the operators \(I_{1}, I_{-1},\) and \(I_{1 / 2}\) are much easier to handle than the Hankel transforms \(F_{\mu}\) and \(\mathcal{F}_{m} .\) This allows simplified computations of Fourier transforms of multivariate radial functions, if the univariate Fourier transforms are known.

Now, how do we solve PDEs this way? Starting with some test function \(f_{0},\) we can define \[f_{\alpha}:=I_{\alpha} f_{0} \quad(\alpha \in \mathbb{R})\] and get a variety of integral or differential equations from application of the \(I_{\alpha}\) operators via the identities \[f_{\alpha+\beta}=I_{\beta} f_{\alpha}=I_{\alpha} f_{\beta}\] Furthermore, we can set \(g_{\nu}:=F_{\nu} f_{0}\) and get another series of equations \[\begin{array}{l} I_{\alpha} g_{\nu}=I_{\alpha} F_{\nu} f_{0}=F_{\nu-\alpha} f_{0}=g_{\nu-\alpha} \\ F_{\mu} g_{\nu}=F_{\mu} F_{\nu} f_{0}=I_{\nu-\mu} f_{0}=f_{\nu-\mu} \\ F_{\mu} f_{\alpha}=F_{\mu} I_{\alpha} f_{0}=F_{\mu+\alpha} f_{0}=g_{\mu+\alpha} \end{array}\]

For compactly supported functions, we proceed as follows. Take the characteristic function \(f_{0}(r)=\chi_{[0,1]}(r)\); we get the truncated power function \[\left(I_{\alpha} f_{0}\right)(r)=\int_{0}^{1} \frac{(s-r)_{+}^{\alpha-1}}{\Gamma(\alpha)} d s=\frac{(1-r)_{+}^{\alpha}}{\Gamma(\alpha+1)}=f_{\alpha}(r), \quad \alpha>0.\] Now we find \[f_{\alpha}=F_{\mu} H_{\nu}\] for \(\nu-\mu=\alpha+1,\, \nu>\mu>-1\) and \[F_{\mu} f_{\alpha}=H_{\mu+\alpha+1}\] for \(\alpha>0,\, \mu>-1.\)
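The truncated-power identity is another easy numerical check, as a sketch:

```python
# Check (I_alpha chi_[0,1])(r) = (1-r)_+^alpha / Gamma(alpha+1).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def I_alpha_indicator(alpha, r):
    """I_alpha applied to the indicator of [0,1]; the integral stops at 1."""
    if r >= 1:
        return 0.0
    return quad(lambda s: (s - r) ** (alpha - 1) / gamma(alpha), r, 1)[0]

alpha, r = 1.5, 0.3
print(I_alpha_indicator(alpha, r), (1 - r) ** alpha / gamma(alpha + 1))
```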

Was that useful mathematics or a shaggy dog story?

References

Baker, John A. 1997. “Integration Over Spheres and the Divergence Theorem for Balls.” The American Mathematical Monthly 104 (1): 36–47. https://doi.org/10.1080/00029890.1997.11990594.
Barthe, Franck, Olivier Guedon, Shahar Mendelson, and Assaf Naor. 2005. “A Probabilistic Approach to the Geometry of the \(\ell_p^n\)-Ball.” The Annals of Probability 33 (2). https://doi.org/10.1214/009117904000000874.
Bie, H. De, and F. Sommen. 2007. “Spherical Harmonics and Integration in Superspace.” Journal of Physics A: Mathematical and Theoretical 40 (26): 7193–7212. https://doi.org/10.1088/1751-8113/40/26/007.
Bowman, Frank. 2012. Introduction to Bessel Functions. Dover Publications. https://www.researchgate.net/profile/Vikash_Pandey5/post/Can-anyone-suggest-books-on-the-fundamental-understanding-of-bessel-functions-with-worked-examples/attachment/59d61e2079197b807797c85a/AS%3A276097904410638%401442838278612/download/Bowman_Bessel_Functions.pdf.
Buhmann, M. 2001. “A New Class of Radial Basis Functions with Compact Support.” Mathematics of Computation 70 (233): 307–18. https://doi.org/10.1090/S0025-5718-00-01251-5.
Cheng, Xiuyuan, and Amit Singer. 2013. “The Spectrum of Random Inner-Product Kernel Matrices.” Random Matrices: Theory and Applications 02 (04): 1350010. https://doi.org/10.1142/S201032631350010X.
Christensen, Jens Peter Reus. 1970. “On Some Measures Analogous to Haar Measure.” MATHEMATICA SCANDINAVICA 26 (June): 103–6. https://doi.org/10.7146/math.scand.a-10969.
Debeerst, Ruben, Mark van Hoeij, and Wolfram Koepf. 2008. “Solving Differential Equations in Terms of Bessel Functions.” In Proceedings of the Twenty-First International Symposium on Symbolic and Algebraic Computation - ISSAC ’08, 39. Linz/Hagenberg, Austria: ACM Press. https://doi.org/10.1145/1390768.1390777.
Defferrard, Michaël, Martino Milani, Frédérick Gusset, and Nathanaël Perraudin. 2020. “DeepSphere: A Graph-Based Spherical CNN.” arXiv:2012.15000 [cs, Stat], December. http://arxiv.org/abs/2012.15000.
Defferrard, Michaël, Nathanaël Perraudin, Tomasz Kacprzak, and Raphael Sgier. 2019. “DeepSphere: Towards an Equivariant Graph-Based Spherical CNN.” arXiv:1904.05146 [cs, Stat], April. http://arxiv.org/abs/1904.05146.
Dokmanic, I., and D. Petrinovic. 2010. “Convolution on the \(n\)-Sphere With Application to PDF Modeling.” IEEE Transactions on Signal Processing 58 (3): 1157–70. https://doi.org/10.1109/TSP.2009.2033329.
Dominici, Diego E., Peter M. W. Gill, and Taweetham Limpanuparb. 2012. “A Remarkable Identity Involving Bessel Functions.” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 468 (2145): 2667–81. https://doi.org/10.1098/rspa.2011.0664.
El Karoui, Noureddine. 2010. “The Spectrum of Kernel Random Matrices.” The Annals of Statistics 38 (1). https://doi.org/10.1214/08-AOS648.
Folland, Gerald B. 2001. “How to Integrate A Polynomial Over A Sphere.” The American Mathematical Monthly 108 (5): 446–48. https://doi.org/10.1080/00029890.2001.11919774.
Görlich, E., C. Markett, and O. Stüpp. 1994. “Integral Formulas Associated with Products of Bessel Functions: A New Partial Differential Equation Approach.” Journal of Computational and Applied Mathematics 51 (2): 135–57. https://doi.org/10.1016/0377-0427(92)00011-W.
Grafakos, Loukas, and Gerald Teschl. 2013. “On Fourier Transforms of Radial Functions and Distributions.” Journal of Fourier Analysis and Applications 19 (1): 167–79. https://doi.org/10.1007/s00041-012-9242-5.
Haimo, Deborah Tepper. 1964. “Integral Equations Associated with Hankel Convolution(s),” 46.
Kausel, Eduardo, and Mirza M. Irfan Baig. 2012. “Laplace Transform of Products of Bessel Functions: A Visitation of Earlier Formulas.” Quarterly of Applied Mathematics 70 (1): 77–97. https://doi.org/10.1090/S0033-569X-2011-01239-2.
Maširević, Dragana Jankov, and Tibor K. Pogány. 2019. “Integral Representations for Products of Two Bessel or Modified Bessel Functions.” Mathematics 7 (10): 978. https://doi.org/10.3390/math7100978.
Meckes, Elizabeth. 2006. “An Infinitesimal Version of Stein’s Method of Exchangeable Pairs.”
———. 2009. “On Stein’s Method for Multivariate Normal Approximation.” In High Dimensional Probability V: The Luminy Volume, 153–78. Beachwood, Ohio, USA: Institute of Mathematical Statistics. https://doi.org/10.1214/09-IMSCOLL511.
———. 2012. “Projections of Probability Distributions: A Measure-Theoretic Dvoretzky Theorem.” In Geometric Aspects of Functional Analysis: Israel Seminar 2006–2010, edited by Bo’az Klartag, Shahar Mendelson, and Vitali D. Milman, 317–26. Lecture Notes in Mathematics. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-29849-3_18.
Pewsey, Arthur, and Eduardo García-Portugués. 2020. “Recent Advances in Directional Statistics.” arXiv:2005.06889 [stat], September. http://arxiv.org/abs/2005.06889.
Pinsky, M. A., N. K. Stanton, and P. E. Trapa. 1993. “Fourier Series of Radial Functions in Several Variables.” Journal of Functional Analysis 116 (1): 111–32. https://doi.org/10.1006/jfan.1993.1106.
Potts, Daniel, and Niel Van Buggenhout. 2017. “Fourier Extension and Sampling on the Sphere.” In 2017 International Conference on Sampling Theory and Applications (SampTA), 82–86. Tallin, Estonia: IEEE. https://doi.org/10.1109/SAMPTA.2017.8024365.
Schaback, R. 2007. “A Practical Guide to Radial Basis Functions,” 58.
Schaback, Robert, and Holger Wendland. 1999. “Using Compactly Supported Radial Basis Functions To Solve Partial Differential Equations,” 14.
Schaback, Robert, and Z. Wu. 1996. “Operators on Radial Functions.” Journal of Computational and Applied Mathematics 73 (1): 257–70. https://doi.org/10.1016/0377-0427(96)00047-7.
Smola, Alex J., Zoltán L. Óvári, and Robert C. Williamson. 2000. “Regularization with Dot-Product Kernels.” In Proceedings of the 13th International Conference on Neural Information Processing Systems, 290–96. NIPS’00. Cambridge, MA, USA: MIT Press. https://openreview.net/forum?id=ryXbEvbdWS.
Stam, A. J. 1982. “Limit Theorems for Uniform Distributions on Spheres in High-Dimensional Euclidean Spaces.” Journal of Applied Probability 19 (1): 221–28. https://doi.org/10.2307/3213932.
Tabrizi, Mehdi, and Ebrahim Maleki Harsini. 2016. “On the Relation Between Airy Integral and Bessel Functions Revisited.” arXiv:1605.03369 [math-Ph], May. http://arxiv.org/abs/1605.03369.
Trask, Nathaniel, Huaiqian You, Yue Yu, and Michael L. Parks. 2018. “An asymptotically compatible meshfree quadrature rule for nonlocal problems with applications to peridynamics.” Computer Methods in Applied Mechanics and Engineering 343 (September). https://doi.org/10.1016/j.cma.2018.08.016.
Vembu, S. 1961. “Fourier Transformation of the n -Dimensional Radial Delta Function.” The Quarterly Journal of Mathematics 12 (1): 165–68. https://doi.org/10.1093/qmath/12.1.165.
Wathen, Andrew J., and Shengxin Zhu. 2015. “On Spectral Distribution of Kernel Matrices Related to Radial Basis Functions.” Numerical Algorithms 70 (4): 709–26. https://doi.org/10.1007/s11075-015-9970-0.
Wendland, Holger. 1999. “On the Smoothness of Positive Definite and Radial Functions.” Journal of Computational and Applied Mathematics 101 (1): 177–88. https://doi.org/10.1016/S0377-0427(98)00218-0.
Xu, Yuan. 2001. “Orthogonal Polynomials and Cubature Formulae on Balls, Simplices, and Spheres.” Journal of Computational and Applied Mathematics, Numerical Analysis 2000. Vol. V: Quadrature and Orthogonal Polynomials, 127 (1): 349–68. https://doi.org/10.1016/S0377-0427(00)00504-5.
———. 2004. “Polynomial Interpolation on the Unit Sphere and on the Unit Ball.” Advances in Computational Mathematics 20 (1): 247–60. https://doi.org/10.1023/A:1025851005416.
