Infinite width limits of neural networks



Neural Network Gaussian Process

See Neural network Gaussian process on Wikipedia.

The field that sprang from the insight (Neal 1996a) that, in the infinite-width limit, deep NNs converge to Gaussian processes, so that a large body of GP theory can be brought to bear on them. Some neural nets exploit this behaviour even far from the infinite limit; see random neural nets.
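For a fully connected ReLU network the limiting GP kernel can be written down in closed form via the arc-cosine recursion of Cho and Saul (2009), as used in (Lee et al. 2018): \(K^{l+1}(x,x') = \sigma_b^2 + \sigma_w^2\,\mathbb{E}_{(u,v)\sim\mathcal{N}(0,\Lambda^l)}[\mathrm{relu}(u)\,\mathrm{relu}(v)]\), where \(\Lambda^l\) is the \(2\times 2\) covariance built from \(K^l\). Here is a minimal numpy sketch; the depth and the variances \(\sigma_w^2, \sigma_b^2\) are illustrative choices, not canonical values.

```python
import numpy as np

def relu_nngp_kernel(X, depth=3, sigma_w=np.sqrt(2.0), sigma_b=0.1):
    """NNGP kernel of an infinitely wide ReLU MLP with `depth` hidden layers,
    via the arc-cosine recursion (Cho and Saul 2009; Lee et al. 2018)."""
    n, d = X.shape
    K = sigma_b ** 2 + sigma_w ** 2 * (X @ X.T) / d       # input-layer covariance
    for _ in range(depth):
        diag = np.sqrt(np.diag(K))                        # sqrt of K(x, x)
        outer = np.outer(diag, diag)
        cos_theta = np.clip(K / outer, -1.0, 1.0)
        theta = np.arccos(cos_theta)
        # E[relu(u) relu(v)] for (u, v) ~ N(0, [[Kxx, Kxy], [Kyx, Kyy]])
        E = outer * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
        K = sigma_b ** 2 + sigma_w ** 2 * E
    return K

# GP regression with the limiting kernel: posterior mean at some test points.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(20, 5)), rng.normal(size=(20, 1))
X_test = rng.normal(size=(5, 5))
K = relu_nngp_kernel(np.vstack([X_train, X_test]))
K_tr, K_te_tr = K[:20, :20], K[20:, :20]
post_mean = K_te_tr @ np.linalg.solve(K_tr + 1e-6 * np.eye(20), y_train)
```

Once the limiting kernel is in hand, prediction is plain GP/kernel regression, as in the last lines above.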

Neural Network Tangent Kernel

See Neural tangent kernel on Wikipedia.

Ferenc Huszár provides some Intuition on the Neural Tangent Kernel, i.e. an explanation of the paper (Lee et al. 2019):

It turns out the neural tangent kernel becomes particularly useful when studying learning dynamics in infinitely wide feed-forward neural networks. Why? Because in this limit, two things happen:

  1. First: if we initialize \(\theta_0\) randomly from appropriately chosen distributions, the initial NTK of the network \(k_{\theta_0}\) approaches a deterministic kernel as the width increases. This means that, at initialization, \(k_{\theta_0}\) doesn’t really depend on \(\theta_0\): it is a fixed kernel independent of the specific initialization.
  2. Second: in the infinite limit the kernel \(k_{\theta_t}\) stays constant over time as we optimise \(\theta_t\). This removes the parameter dependence during training.

These two facts put together imply that gradient descent in the infinitely wide and infinitesimally small learning rate limit can be understood as a pretty simple algorithm called kernel gradient descent with a fixed kernel function that depends only on the architecture (number of layers, activations, etc).
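Kernel gradient descent with a frozen kernel is simple enough to write out directly. A minimal numpy sketch of gradient descent in function space with squared loss; the RBF kernel below is just a stand-in for the NTK that the architecture would actually determine:

```python
import numpy as np

def kernel_gradient_descent(K_train, K_test_train, y_train, lr=0.1, steps=2000):
    """Gradient descent in function space with a *fixed* kernel: the limit
    that wide-network training reduces to under the NTK (squared loss,
    vanishing learning rate)."""
    f_train = np.zeros_like(y_train)                  # outputs on training inputs
    f_test = np.zeros((K_test_train.shape[0], y_train.shape[1]))
    for _ in range(steps):
        residual = f_train - y_train                  # dL/df for 0.5 * ||f - y||^2
        f_train = f_train - lr * K_train @ residual
        f_test = f_test - lr * K_test_train @ residual
    return f_test

# Demo with a stand-in kernel; in the infinite-width story K would be the NTK.
def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(20, 5)), rng.normal(size=(20, 1))
X_test = rng.normal(size=(5, 5))

K_tr, K_te_tr = rbf(X_train, X_train), rbf(X_test, X_train)
lr = 1.0 / np.linalg.eigvalsh(K_tr).max()             # keep the iteration stable
preds = kernel_gradient_descent(K_tr, K_te_tr, y_train, lr=lr)
# As steps -> infinity this converges to the kernel-regression predictor
# K_test_train @ inv(K_train) @ y_train, the minimum-RKHS-norm interpolant.
```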

These results, taken together with an older result by Neal (1996a), allow us to characterise the probability distribution of the minima that gradient descent converges to in this infinite limit as a Gaussian process.
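These limiting kernels, and the induced Gaussian process over trained networks, can be computed exactly for many architectures with the Neural Tangents library (Novak et al. 2019). A sketch, assuming its stax-style API as described in that paper (function names and arguments per my reading of it, so treat the details as indicative):

```python
from jax import random
import neural_tangents as nt
from neural_tangents import stax

# Infinite-width limit of a two-hidden-layer ReLU MLP. The widths passed to
# Dense only matter for finite-width sampling; kernel_fn is the exact
# infinite-width kernel regardless.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

k1, k2, k3 = random.split(random.PRNGKey(0), 3)
x_train = random.normal(k1, (20, 5))
y_train = random.normal(k2, (20, 1))
x_test = random.normal(k3, (5, 5))

# Exact limiting kernels: 'nngp' is the Bayesian prior covariance,
# 'ntk' governs gradient-descent training.
kernels = kernel_fn(x_train, x_train, ('nngp', 'ntk'))

# Mean and covariance of the test outputs of an ensemble of infinitely wide
# networks trained to convergence by gradient descent on MSE loss.
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
mean, cov = predict_fn(x_test=x_test, get='ntk', compute_cov=True)
```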

Implicit regularization

Here’s one interesting perspective (Zhang et al. 2017).

  • The effective capacity of neural networks is large enough for a brute-force memorization of the entire data set.

  • Even optimization on random labels remains easy. In fact, training time increases only by a small constant factor compared with training on the true labels.

  • Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged.

[…] Explicit regularization may improve generalization performance, but is neither necessary nor by itself sufficient for controlling generalization error. […] Appealing to linear models, we analyze how SGD acts as an implicit regularizer.
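The linear-model part of that claim is easy to reproduce: on an over-parameterized least-squares problem, gradient descent started from zero converges to the minimum-norm interpolating solution, i.e. the pseudo-inverse solution, with no explicit regularizer anywhere. A minimal sketch (the data are arbitrary Gaussians, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                         # far more parameters than data points
X = rng.normal(size=(n, d))
y = rng.normal(size=(n, 1))

# Plain gradient descent on 0.5 * ||X w - y||^2, started from w = 0.
w = np.zeros((d, 1))
lr = 1.0 / np.linalg.norm(X, 2) ** 2
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)

w_min_norm = np.linalg.pinv(X) @ y     # minimum-l2-norm interpolating solution

print(np.abs(X @ w - y).max())         # ~0: the (arbitrary) labels are fit exactly
print(np.linalg.norm(w - w_min_norm))  # ~0: GD picked the minimum-norm solution
```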

Dropout

Dropout turns out to be a type of regularization with a natural GP interpretation (Gal and Ghahramani 2015).
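Concretely, Gal and Ghahramani (2015) keep dropout active at test time and read the spread of the stochastic forward passes as an approximate posterior predictive. A minimal numpy sketch of that test-time procedure, on an untrained two-layer network purely for illustration (in practice the weights would come from training with dropout):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, p_drop = 5, 100, 1, 0.5

# Illustrative random weights standing in for a trained network.
W1 = rng.normal(size=(d_in, d_hidden)) / np.sqrt(d_in)
W2 = rng.normal(size=(d_hidden, d_out)) / np.sqrt(d_hidden)

def mc_dropout_forward(x, n_samples=200):
    """Monte Carlo dropout: keep dropout switched on at test time and treat the
    stochastic outputs as samples from an approximate posterior predictive."""
    outputs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)
        mask = rng.random(h.shape) > p_drop            # Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)                  # inverted-dropout scaling
        outputs.append(h @ W2)
    outputs = np.stack(outputs)                        # (n_samples, n, d_out)
    return outputs.mean(axis=0), outputs.std(axis=0)   # predictive mean and spread

x_test = rng.normal(size=(3, d_in))
mean, std = mc_dropout_forward(x_test)
```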

References

Adlam, Ben, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, and Jasper Snoek. 2020. “Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit.” October 14, 2020. http://arxiv.org/abs/2010.07355.
Arora, Sanjeev, Simon S. Du, Wei Hu, Zhiyuan Li, Russ R. Salakhutdinov, and Ruosong Wang. n.d. “On Exact Computation with an Infinitely Wide Neural Net.”
Belkin, Mikhail, Siyuan Ma, and Soumik Mandal. 2018. “To Understand Deep Learning We Need to Understand Kernel Learning.” In International Conference on Machine Learning, 541–49. http://proceedings.mlr.press/v80/belkin18a.html.
Chen, Lin, and Sheng Xu. 2020. “Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS.” October 1, 2020. http://arxiv.org/abs/2009.10683.
Cho, Youngmin, and Lawrence K. Saul. 2009. “Kernel Methods for Deep Learning.” In Proceedings of the 22nd International Conference on Neural Information Processing Systems, 22:342–50. NIPS’09. Red Hook, NY, USA: Curran Associates Inc. https://papers.nips.cc/paper/2009/hash/5751ec3e9a4feab575962e78e006250d-Abstract.html.
Domingos, Pedro. 2020. “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” November 30, 2020. http://arxiv.org/abs/2012.00152.
Fan, Zhou, and Zhichao Wang. 2020. “Spectra of the Conjugate Kernel and Neural Tangent Kernel for Linear-Width Neural Networks.” In Advances in Neural Information Processing Systems. Vol. 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/572201a4497b0b9f02d4f279b09ec30d-Abstract.html.
Fort, Stanislav, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, and Surya Ganguli. 2020. “Deep Learning Versus Kernel Learning: An Empirical Study of Loss Landscape Geometry and the Time Evolution of the Neural Tangent Kernel.” In Advances in Neural Information Processing Systems. Vol. 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/405075699f065e43581f27d67bb68478-Abstract.html.
Gal, Yarin, and Zoubin Ghahramani. 2015. “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.” In Proceedings of the 33rd International Conference on Machine Learning (ICML-16). http://arxiv.org/abs/1506.02142.
———. 2016. “A Theoretically Grounded Application of Dropout in Recurrent Neural Networks.” http://arxiv.org/abs/1512.05287.
Garis Matthews, Alexander Graeme de, Mark Rowland, Jiri Hron, Richard E. Turner, and Zoubin Ghahramani. 2018. “Gaussian Process Behaviour in Wide Deep Neural Networks.” August 16, 2018. http://arxiv.org/abs/1804.11271.
Geifman, Amnon, Abhay Yadav, Yoni Kasten, Meirav Galun, David Jacobs, and Ronen Basri. 2020. “On the Similarity Between the Laplace and Neural Tangent Kernels.” http://arxiv.org/abs/2007.01580.
Ghahramani, Zoubin. 2013. “Bayesian Non-Parametrics and the Probabilistic Approach to Modelling.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371 (1984): 20110553. https://doi.org/10.1098/rsta.2011.0553.
Giryes, R., G. Sapiro, and A. M. Bronstein. 2016. “Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?” IEEE Transactions on Signal Processing 64 (13): 3444–57. https://doi.org/10.1109/TSP.2016.2546221.
He, Bobby, Balaji Lakshminarayanan, and Yee Whye Teh. 2020. “Bayesian Deep Ensembles via the Neural Tangent Kernel.” In Advances in Neural Information Processing Systems. Vol. 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/0b1ec366924b26fc98fa7b71a9c249cf-Abstract.html.
Jacot, Arthur, Franck Gabriel, and Clement Hongler. 2018. “Neural Tangent Kernel: Convergence and Generalization in Neural Networks.” In Advances in Neural Information Processing Systems, 31:8571–80. NIPS’18. Red Hook, NY, USA: Curran Associates Inc. https://papers.nips.cc/paper/2018/hash/5a4be1fa34e62bb8a6ec6b91d2462f5a-Abstract.html.
Karakida, Ryo, and Kazuki Osawa. 2020. “Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks.” Advances in Neural Information Processing Systems 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/7b41bfa5085806dfa24b8c9de0ce567f-Abstract.html.
Lee, Jaehoon, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. 2018. “Deep Neural Networks as Gaussian Processes.” In ICLR. http://arxiv.org/abs/1711.00165.
Lee, Jaehoon, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. 2019. “Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent.” December 8, 2019. http://arxiv.org/abs/1902.06720.
Meronen, Lassi, Christabella Irwanto, and Arno Solin. 2020. “Stationary Activations for Uncertainty Calibration in Deep Learning.” In Advances in Neural Information Processing Systems. Vol. 33. https://proceedings.neurips.cc//paper_files/paper/2020/hash/18a411989b47ed75a60ac69d9da05aa5-Abstract.html.
Neal, Radford M. 1996a. Bayesian Learning for Neural Networks. Vol. 118. Secaucus, NJ, USA: Springer-Verlag New York, Inc. http://www.csri.utoronto.ca/~radford/ftp/thesis.pdf.
———. 1996b. “Priors for Infinite Networks.” In Bayesian Learning for Neural Networks, edited by Radford M. Neal, 29–53. Lecture Notes in Statistics. New York, NY: Springer. https://doi.org/10.1007/978-1-4612-0745-0_2.
Novak, Roman, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. 2019. “Neural Tangents: Fast and Easy Infinite Neural Networks in Python.” December 5, 2019. http://arxiv.org/abs/1912.02803.
Novak, Roman, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. 2020. “Bayesian Deep Convolutional Networks with Many Channels Are Gaussian Processes.” August 21, 2020. http://arxiv.org/abs/1810.05148.
Pearce, Tim, Russell Tsuchida, Mohamed Zaki, Alexandra Brintrup, and Andy Neely. n.d. “Expressive Priors in Bayesian Neural Networks: Kernel Combinations and Periodic Functions.”
Williams, Christopher K. I. 1996. “Computing with Infinite Networks.” In Proceedings of the 9th International Conference on Neural Information Processing Systems, 295–301. NIPS’96. Cambridge, MA, USA: MIT Press. https://openreview.net/forum?id=S1V2dDbdZH.
Yang, Greg. 2019. “Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture Are Gaussian Processes.” December 7, 2019. http://arxiv.org/abs/1910.12478.
Yang, Greg, and Edward J. Hu. 2020. “Feature Learning in Infinite-Width Neural Networks.” November 29, 2020. http://arxiv.org/abs/2011.14522.
Zhang, Chiyuan, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. “Understanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR. http://arxiv.org/abs/1611.03530.