Neural net kernels

September 16, 2019 — May 24, 2021

Hilbert space
kernel tricks
machine learning
metrics
probabilistic algorithms
signal processing
spheres
statistics
stochastic processes
Figure 1: How I imagine the hyperspherical regularity of an NN kernel.

Random infinite-width NNs induce covariances which are nearly dot-product kernels in the input parameters. Say we wish to compare the outputs given two input examples \(\mathbf{x}\) and \(\mathbf{y}\). These depend on the dot products \(\mathbf{x}^{\top} \mathbf{x}\), \(\mathbf{x}^{\top} \mathbf{y}\) and \(\mathbf{y}^{\top} \mathbf{y}\). Often it is convenient to discuss the angle \(\theta\) between the inputs: \[ \theta=\cos ^{-1}\left(\frac{\mathbf{x} ^{\top} \mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|}\right) \]

The classic result is that in a wide single-hidden-layer neural net, \[\begin{aligned} K(\mathbf{x}, \mathbf{y}) &= \mathbb{E}\big[ \psi(Z_x) \psi(Z_y) \big], \quad \text{ where} \\ \begin{pmatrix} Z_x \\ Z_y \end{pmatrix} &\sim \mathcal{N} \Bigg( \mathbf{0}, \underbrace{\begin{pmatrix} \mathbf{x}^\top \mathbf{x} & \mathbf{x}^\top \mathbf{y} \\ \mathbf{y}^\top \mathbf{x} & \mathbf{y}^\top \mathbf{y} \end{pmatrix}}_{:=\Sigma} \Bigg). \end{aligned}\] It is sometimes useful to note that \(\begin{pmatrix} Z_x \\ Z_y \end{pmatrix}\overset{d}{=} \operatorname{Chol}(\Sigma)\boldsymbol{Z}_1,\) where \(\boldsymbol{Z}_1\sim \mathcal{N} \Bigg( \mathbf{0}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \Bigg)\) and \(\operatorname{Chol}(\Sigma)\) is the lower-triangular Cholesky factor \(\operatorname{Chol}(\Sigma)= \begin{pmatrix} \|\mathbf{x}\| & 0 \\ \|\mathbf{y}\|\cos \theta & \|\mathbf{y}\|\sqrt{1-\cos^2 \theta} \end{pmatrix}.\)
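Here is a minimal numerical sketch of that construction, assuming a generic activation \(\psi\) passed in as a Python callable, and using the lower-triangular Cholesky factor above to sample \((Z_x, Z_y)\). It is a Monte Carlo sanity check, not something to use in anger.

```python
import numpy as np

def nn_kernel_mc(x, y, psi, n_samples=200_000, seed=0):
    """Monte Carlo estimate of K(x, y) = E[psi(Z_x) psi(Z_y)]."""
    rng = np.random.default_rng(seed)
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = x @ y / (nx * ny)
    # Lower-triangular Cholesky factor of Sigma = [[x.x, x.y], [y.x, y.y]]
    L = np.array([[nx, 0.0],
                  [ny * cos_t, ny * np.sqrt(max(1.0 - cos_t**2, 0.0))]])
    Z = rng.standard_normal((n_samples, 2)) @ L.T   # rows ~ N(0, Sigma)
    return np.mean(psi(Z[:, 0]) * psi(Z[:, 1]))

# Example with a ReLU activation
relu = lambda z: np.maximum(z, 0.0)
x, y = np.array([1.0, 2.0, 0.5]), np.array([-0.3, 1.0, 2.0])
print(nn_kernel_mc(x, y, relu))
```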

These \(Z_{x}\) terms arise from (an appropriately scaled limit of) the random weight matrix \(\mathbf{W}\): \[\begin{aligned} Z_x &= \mathbf{W}^\top\mathbf{x} \\ Z_y &= \mathbf{W}^\top \mathbf{y}. \end{aligned}\] Now, define \[\begin{aligned} Z_{xi} :&= W_{i} x_{i}, \\ Z_{yj} :&= W_{j} y_{j}, \\ Z'_{xi} :&= W_i, \\ Z'_{yj} :&= W_j. \end{aligned}\] We have that \[\begin{aligned} \kappa &= \mathbb{E} \big[ \psi\big(Z_x\big) \psi\big(Z_y \big) \big] \\ \frac{\partial \kappa}{\partial x_{i}} x_{i} &= \mathbb{E} \big[ \psi'\big(Z_x\big) \psi\big(Z_y \big) Z_{xi}\big] \\ \frac{\partial^2 \kappa}{\partial x_{i} \partial y_{j}} x_{i} y_{j} &= \mathbb{E} \big[ \psi'\big(Z_x\big) \psi'\big(Z_y \big) Z_{xi} Z_{yj} \big] \\ \frac{\partial^2 \kappa}{\partial x_{i} \partial x_{j}} x_{i}x_{j} &= \mathbb{E} \big[ \psi''\big(Z_x\big) \psi\big(Z_y \big) Z_{xi} Z_{xj} \big]\end{aligned}\] and thus \[\begin{align*} \frac{\partial \kappa}{\partial x_{i}} &= \mathbb{E} \big[ \psi'\big(Z_x\big) \psi\big(Z_y \big) Z_{xi}'\big] \\ \frac{\partial^2 \kappa}{\partial x_{i} \partial y_{j}} &= \mathbb{E} \big[ \psi'\big(Z_x\big) \psi'\big(Z_y \big) Z_{xi}' Z_{yj}' \big] \\ \frac{\partial^2 \kappa}{\partial x_{i} \partial x_{j}} &= \mathbb{E} \big[ \psi''\big(Z_x\big) \psi\big(Z_y \big) Z_{xi}' Z_{xj}'\big] . \end{align*}\]
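As a sanity check on the first of these derivative identities, here is a rough sketch (my own, not from any reference) that estimates \(\partial\kappa/\partial x_i\) two ways under a smooth \(\tanh\) activation: by central finite differences of a Monte Carlo estimate of \(\kappa\) with common random weights, and directly as \(\mathbb{E}[\psi'(Z_x)\psi(Z_y)W_i]\).

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 500_000
W = rng.standard_normal((n, d))          # rows: weight vectors w ~ N(0, I_d)
x = np.array([0.7, -0.2, 1.1])
y = np.array([0.3, 0.9, -0.5])

psi = np.tanh
dpsi = lambda z: 1.0 - np.tanh(z) ** 2   # psi'

def kappa(x_):
    # Monte Carlo estimate of kappa(x_, y) with fixed (common) weight samples
    return np.mean(psi(W @ x_) * psi(W @ y))

i, eps = 0, 1e-4
e_i = np.eye(d)[i]
fd = (kappa(x + eps * e_i) - kappa(x - eps * e_i)) / (2 * eps)   # finite-difference d kappa / d x_i
mc = np.mean(dpsi(W @ x) * psi(W @ y) * W[:, i])                 # E[psi'(Z_x) psi(Z_y) W_i]
print(fd, mc)   # should agree to several decimal places
```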

1 Erf kernel

Williams (1996) recovers a kernel that corresponds to the erf sigmoidal activation in the infinite-width limit. Let \(\tilde{\mathbf{x}}=\left(1, x_{1}, \ldots, x_{d}\right)\) be an augmented copy of the inputs with a 1 prepended so that it includes the bias, and let \(\Sigma\) be the covariance matrix of the weights (which are usually isotropic, \(\Sigma=\mathrm{I}\) ). Then \(K_{\mathrm{erf}}\left(\mathbf{x}, \mathbf{y}\right)\) can be written as \[ K_{\mathrm{erf}}\left(\mathbf{x}, \mathbf{y}\right)=\frac{1}{(2 \pi)^{\frac{d+1}{2}}|\Sigma|^{1 / 2}} \int \operatorname{erf}\left(\mathbf{w}^{\top} \tilde{\mathbf{x}}\right) \operatorname{erf}\left(\mathbf{w}^{\top} \tilde{\mathbf{y}}\right) \exp \left(-\frac{1}{2} \mathbf{w}^{\top} \Sigma^{-1} \mathbf{w}\right) \mathrm{d}\mathbf{w}. \] This integral can be evaluated analytically to give

\[ K_{\mathrm{erf}}(\mathbf{x}, \mathbf{y}) =\frac{2}{\pi} \sin^{-1} \frac{ 2 \tilde{\mathbf{x}}^{\top} \Sigma \tilde{\mathbf{y}} }{ \sqrt{\left( 1+2 \tilde{\mathbf{x}}^{\top} \Sigma \tilde{\mathbf{x}} \right)\left( 1+2 \tilde{\mathbf{y}}^{\top} \Sigma \tilde{\mathbf{y}} \right)}}. \]

If there is no bias term, you can lop the tildes off (and a factor of \(\sqrt{2\pi}\) from the normalizing constant) and the result still holds. If the weights are isotropic, \(\Sigma=\mathrm{I}\), the \(\Sigma\)s drop out as well.
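A quick Monte Carlo check of the closed form, assuming the erf activation and an isotropic weight prior \(\Sigma=\mathrm{I}\); the specific inputs are arbitrary.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(2)
x = np.array([0.5, -1.0])
y = np.array([1.5, 0.2])
xt = np.concatenate(([1.0], x))          # prepend 1 for the bias
yt = np.concatenate(([1.0], y))
Sigma = np.eye(len(xt))                  # isotropic weight prior

# Monte Carlo: E[ erf(w.xt) erf(w.yt) ],  w ~ N(0, Sigma)
W = rng.multivariate_normal(np.zeros(len(xt)), Sigma, size=500_000)
k_mc = np.mean(erf(W @ xt) * erf(W @ yt))

# Closed form
num = 2 * xt @ Sigma @ yt
den = np.sqrt((1 + 2 * xt @ Sigma @ xt) * (1 + 2 * yt @ Sigma @ yt))
k_cf = (2 / np.pi) * np.arcsin(num / den)

print(k_mc, k_cf)   # should be close
```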

2 Arc-cosine kernel

An interesting dot-product kernel is the arc-cosine kernel (Cho and Saul 2009):

\[ K_{n}(\mathbf{x}, \mathbf{y})= \frac{2}{(2 \pi)^{\frac{d}{2}}} \int \Theta(\mathbf{w} ^{\top} \mathbf{x}) \Theta(\mathbf{w} ^{\top} \mathbf{y})(\mathbf{w} ^{\top} \mathbf{x})^{n}(\mathbf{w} ^{\top} \mathbf{y})^{n} \exp\left(-\frac{1}{2}\mathbf{w}^{\top}\mathbf{w}\right) \mathrm{d}\mathbf{w} \]

Specifically, \[ K_{n}(\mathbf{x}, \mathbf{y})=\frac{1}{\pi}\|\mathbf{x}\|^{n}\|\mathbf{y}\|^{n} J_{n}(\theta) \] where \(J_{n}(\theta)\) is given by \[ J_{n}(\theta)=(-1)^{n}(\sin \theta)^{2 n+1}\left(\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\right)^{n}\left(\frac{\pi-\theta}{\sin \theta}\right). \] The first few \(J_{n}\) are \[ \begin{array}{l} J_{0}(\theta)=\pi-\theta \\ J_{1}(\theta)=\sin \theta+(\pi-\theta) \cos \theta. \end{array} \] \(J_{1}\) corresponds to the ReLU activation in the infinite-width limit; i.e. the order-\(1\) arc-cosine kernel, which arises when \(\psi\) is the ReLU, is \[\begin{aligned} k(\mathbf{x}, \mathbf{y}) &= \frac{\sigma_w^2 \Vert \mathbf{x} \Vert \Vert \mathbf{y} \Vert }{2\pi} \Big( \sin |\theta| + \big(\pi - |\theta| \big) \cos\theta \Big). \end{aligned} \]
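And the corresponding check for the ReLU case (taking \(\sigma_w=1\)): the Monte Carlo average of \(\operatorname{ReLU}(\mathbf{w}^\top\mathbf{x})\operatorname{ReLU}(\mathbf{w}^\top\mathbf{y})\) over standard Gaussian weights should match the closed form above. A throwaway sketch with arbitrary inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.array([1.0, 0.5, -0.2])
y = np.array([0.3, -1.0, 0.8])

# Monte Carlo: E[ ReLU(w.x) ReLU(w.y) ],  w ~ N(0, I)
W = rng.standard_normal((500_000, len(x)))
k_mc = np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0))

# Closed form via the order-1 arc-cosine kernel (sigma_w = 1)
nx, ny = np.linalg.norm(x), np.linalg.norm(y)
theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
k_cf = nx * ny / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

print(k_mc, k_cf)   # should be close
```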

Observation: This appears related to Grothendieck’s identity: for \(g\sim\mathcal{N}(0,\mathrm{I}_n)\) and any fixed vectors \(u, v \in \mathbb{S}^{n-1},\) we have \[ \mathbb{E}\big[ \operatorname{sign}(g^{\top}u) \operatorname{sign}(g^{\top}v)\big]=\frac{2}{\pi} \arcsin u^{\top} v. \] I don’t have any use for that; it is just a cool identity I wanted to note down. In an aside, Djalil Chafaï observes that the Rademacher RV is the uniform distribution on the sphere \(\mathbb{S}^{0}=\{-1,+1\}\subset\mathbb{R}^1.\) Is that what makes this go?
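For what it’s worth, Grothendieck’s identity is easy to confirm numerically; a throwaway sketch with arbitrary unit vectors:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
u = rng.standard_normal(n); u /= np.linalg.norm(u)   # random unit vectors
v = rng.standard_normal(n); v /= np.linalg.norm(v)

G = rng.standard_normal((500_000, n))                # rows: g ~ N(0, I_n)
lhs = np.mean(np.sign(G @ u) * np.sign(G @ v))
rhs = (2 / np.pi) * np.arcsin(u @ v)
print(lhs, rhs)   # should be close
```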

3 Absolutely homogeneous

Activation functions which are absolutely homogeneous of degree \(r\), i.e. satisfying \(\psi(|a|z)=|a|^r\psi(z)\), have additional structure. This class includes the ReLU and leaky ReLU activations (which also appear via the order-1 arc-cosine kernel above). It follows from the definition that functions \(f\) drawn from an NN with such an activation (and no bias terms) a.s. satisfy \(f(|a|\mathbf{x}) = |a|^r f(\mathbf{x})\).
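This is easy to see empirically: a random single-hidden-layer ReLU network without biases scales exactly with its input. A minimal sketch (sizes and scalings arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
d, width = 3, 1000
W = rng.standard_normal((width, d)) / np.sqrt(d)   # hidden-layer weights
v = rng.standard_normal(width) / np.sqrt(width)    # output weights

relu = lambda z: np.maximum(z, 0.0)
f = lambda x: v @ relu(W @ x)    # single-hidden-layer ReLU net, no biases

x = rng.standard_normal(d)
a = 3.7
print(f(a * x), a * f(x))   # equal: f is homogeneous of degree 1
```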

For activations that are absolutely homogeneous of degree \(r=1\) (such as the ReLU) we can sum the derivatives over the coordinate indices. Since such a \(\psi\) is piecewise linear through the origin, \(z^2\psi''(z)=0\) and \(z\psi'(z)=\psi(z)\), so \[\begin{aligned} \sum_{i,j=1}^d \frac{\partial^2 \kappa}{\partial x_{i} \partial x_{j}} x_{i} x_{j} &= \mathbb{E} \big[ \psi''\big(Z_x\big) \psi\big(Z_y \big) (Z_x)^2 \big] = 0 \\ \sum_{i,j=1}^d \frac{\partial^2 \kappa}{\partial y_{i} \partial y_{j}} y_{i} y_{j} &= \mathbb{E} \big[ \psi''\big(Z_y\big) \psi\big(Z_x \big) (Z_y)^2 \big] = 0 \\ \sum_{i,j=1}^d \frac{\partial^2 \kappa}{\partial x_{i} \partial y_{j}} x_{i} y_{j}&= \mathbb{E} \big[ \psi'\big(Z_x\big) \psi'\big(Z_y \big) Z_{x} Z_{y} \big] = \kappa. \end{aligned}\] In matrix form, \[\begin{aligned} \mathbf{x}^{\top}\frac{\partial^2 \kappa}{ \partial \mathbf{x}\, \partial \mathbf{y}^\top} \mathbf{y} &=\kappa\\ \mathbf{x}^{\top}\frac{\partial^2 \kappa}{ \partial \mathbf{x}\, \partial \mathbf{x}^\top} \mathbf{x} &=0\\ \mathbf{y}^{\top}\frac{\partial^2 \kappa}{ \partial \mathbf{y}\, \partial \mathbf{y}^\top} \mathbf{y} &=0. \end{aligned}\]
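As a sanity check of the first matrix identity, one can finite-difference the closed-form ReLU kernel from above (with \(\sigma_w=1\)); the quadratic form \(\mathbf{x}^\top \frac{\partial^2\kappa}{\partial\mathbf{x}\,\partial\mathbf{y}^\top}\mathbf{y}\) should reproduce \(\kappa(\mathbf{x},\mathbf{y})\). A rough sketch:

```python
import numpy as np

def kappa(x, y):
    """Closed-form ReLU kernel (sigma_w = 1)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def mixed_hessian(x, y, eps=1e-4):
    """Finite-difference estimate of d^2 kappa / dx_i dy_j."""
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (kappa(x + ei, y + ej) - kappa(x + ei, y - ej)
                       - kappa(x - ei, y + ej) + kappa(x - ei, y - ej)) / (4 * eps**2)
    return H

x = np.array([1.0, 0.4, -0.7])
y = np.array([0.2, -1.1, 0.5])
print(x @ mixed_hessian(x, y) @ y, kappa(x, y))   # should agree
```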

4 References

Adlam, Lee, Xiao, et al. 2020. “Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit.” arXiv:2010.07355 [Cs, Stat].
Arora, Du, Hu, et al. 2019. “On Exact Computation with an Infinitely Wide Neural Net.” In Advances in Neural Information Processing Systems.
Belkin, Ma, and Mandal. 2018. “To Understand Deep Learning We Need to Understand Kernel Learning.” In International Conference on Machine Learning.
Chen, and Xu. 2020. “Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS.” arXiv:2009.10683 [Cs, Math, Stat].
Cho, and Saul. 2009. “Kernel Methods for Deep Learning.” In Proceedings of the 22nd International Conference on Neural Information Processing Systems. NIPS’09.
Domingos. 2020. “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” arXiv:2012.00152 [Cs, Stat].
Fan, and Wang. 2020. “Spectra of the Conjugate Kernel and Neural Tangent Kernel for Linear-Width Neural Networks.” In Advances in Neural Information Processing Systems.
Fort, Dziugaite, Paul, et al. 2020. “Deep Learning Versus Kernel Learning: An Empirical Study of Loss Landscape Geometry and the Time Evolution of the Neural Tangent Kernel.” In Advances in Neural Information Processing Systems.
Geifman, Yadav, Kasten, et al. 2020. “On the Similarity Between the Laplace and Neural Tangent Kernels.” arXiv:2007.01580 [Cs, Stat].
He, Lakshminarayanan, and Teh. 2020. “Bayesian Deep Ensembles via the Neural Tangent Kernel.” In Advances in Neural Information Processing Systems.
Jacot, Gabriel, and Hongler. 2018. “Neural Tangent Kernel: Convergence and Generalization in Neural Networks.” In Advances in Neural Information Processing Systems. NIPS’18.
Neal. 1996. “Priors for Infinite Networks.” In Bayesian Learning for Neural Networks. Lecture Notes in Statistics.
Pearce, Tsuchida, Zaki, et al. 2019. “Expressive Priors in Bayesian Neural Networks: Kernel Combinations and Periodic Functions.” In Uncertainty in Artificial Intelligence.
Simon, Anand, and DeWeese. 2022. “Reverse Engineering the Neural Tangent Kernel.”
Tsuchida, Roosta, and Gallagher. 2018. “Invariance of Weight Distributions in Rectified MLPs.” In International Conference on Machine Learning.
Williams. 1996. “Computing with Infinite Networks.” In Proceedings of the 9th International Conference on Neural Information Processing Systems. NIPS’96.