Infinite-width limits of neural networks



Large-width limits of neural nets: an interesting way of thinking about overparameterization.

Neural Network Gaussian Process

For now: See Neural network Gaussian process on Wikipedia.

The field that sprang from the insight (Neal 1996a) that in the infinite-width limit, random neural nets with Gaussian weights and appropriate scaling converge to certain special Gaussian processes, and that there are useful conclusions we can draw from that.

More generally we might consider correlated and/or non-Gaussian weights, and deep networks. Unless otherwise stated though, I am thinking about i.i.d. Gaussian weights, and a single hidden layer.

In this single-hidden-layer case we get tractable covariance structure. See NN kernels.
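A minimal numerical sketch of that claim (mine, not taken from the references): for a single hidden layer with ReLU activations and i.i.d. standard Gaussian weights scaled by \(1/\sqrt{\text{width}}\), the limiting covariance is the order-1 arc-cosine kernel of Cho and Saul (2009), and we can check it by Monte Carlo over random networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def arccos_kernel_1(x, y):
    """E[relu(w @ x) * relu(w @ y)] for w ~ N(0, I): the order-1 arc-cosine kernel."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def random_relu_net_outputs(X, width, n_nets):
    """Outputs of n_nets independent random one-hidden-layer ReLU nets,
    f(x) = v @ relu(W.T @ x) / sqrt(width), with i.i.d. N(0, 1) weights."""
    n, d = X.shape
    outs = np.empty((n_nets, n))
    for i in range(n_nets):
        W = rng.standard_normal((d, width))
        v = rng.standard_normal(width)
        outs[i] = np.maximum(X @ W, 0.0) @ v / np.sqrt(width)
    return outs

X = rng.standard_normal((3, 5))                 # three inputs in R^5
f = random_relu_net_outputs(X, width=500, n_nets=4000)
empirical = f.T @ f / f.shape[0]                # Monte Carlo estimate of E[f(x_p) f(x_q)]
limit = np.array([[arccos_kernel_1(xp, xq) for xq in X] for xp in X])
print(np.abs(empirical - limit).max())          # small, up to Monte Carlo error
```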

For some reason this evokes multi-layer wide NNs for me

Neural Tangent Kernel

NTK? See Neural Tangent Kernel.
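As a toy illustration (my own sketch, not the Jacot et al. (2018) construction verbatim): the empirical NTK at a given parameter setting is the Gram matrix of parameter gradients, \(\Theta(\mathbf{x}, \mathbf{x}') = \nabla_\theta f(\mathbf{x})^\top \nabla_\theta f(\mathbf{x}')\). For a one-hidden-layer ReLU net the gradients are simple enough to write out by hand, and at large width the matrix concentrates around a deterministic limit.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_ntk(X, width):
    """Empirical NTK Gram matrix of f(x) = v @ relu(W @ x) / sqrt(width)
    at a fresh random initialisation, via explicit parameter gradients."""
    n, d = X.shape
    W = rng.standard_normal((width, d))
    v = rng.standard_normal(width)
    pre = X @ W.T                               # (n, width) pre-activations
    act = np.maximum(pre, 0.0)                  # relu(W @ x)
    dact = (pre > 0).astype(float)              # relu'(W @ x)
    # df/dv_i = relu(w_i @ x) / sqrt(m);  df/dW_i = v_i relu'(w_i @ x) x / sqrt(m)
    ntk = act @ act.T / width
    ntk += ((dact * v) @ (dact * v).T) * (X @ X.T) / width
    return ntk

X = rng.standard_normal((4, 3))
print(empirical_ntk(X, width=100))       # noisy: varies a lot between random draws
print(empirical_ntk(X, width=100_000))   # two independent wide draws are nearly
print(empirical_ntk(X, width=100_000))   # identical: the kernel concentrates
```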

Implicit regularization

Here’s one interesting perspective on wide nets (Zhang et al. 2017); it looks rather like the NTK model, but is it? To read.

  • The effective capacity of neural networks is large enough for a brute-force memorization of the entire data set.

  • Even optimization on random labels remains easy. In fact, training time increases only by a small constant factor compared with training on the true labels.

  • Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged.

[…] Explicit regularization may improve generalization performance, but is neither necessary nor by itself sufficient for controlling generalization error. […] Appealing to linear models, we analyze how SGD acts as an implicit regularizer.
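The linear-model version of that last claim is easy to verify numerically (my own sketch, not the authors’ analysis): on an overparameterized least-squares problem, gradient descent initialised at zero converges to the minimum-\(\ell_2\)-norm interpolant, i.e. the pseudoinverse solution, even though infinitely many interpolating solutions exist.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 20, 100                            # far more parameters than data points
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain gradient descent on 0.5 * ||X @ w - y||^2, started from w = 0.
w = np.zeros(d)
lr = 1e-3
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y)

w_minnorm = np.linalg.pinv(X) @ y         # the minimum-norm interpolating solution

print(np.linalg.norm(X @ w - y))          # ~0: the data are interpolated exactly
print(np.linalg.norm(w - w_minnorm))      # ~0: GD picked the min-norm interpolant
```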

Dropout

Dropout is sometimes argued to simulate from a certain kind of Gaussian process induced by a neural net (Gal and Ghahramani 2015). See Dropout.
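A minimal sketch of the mechanism in the spirit of Gal and Ghahramani (2015), on a toy net with made-up weights: keep sampling dropout masks at prediction time and read a predictive mean and spread off the stochastic forward passes.

```python
import numpy as np

rng = np.random.default_rng(3)

# A fixed one-hidden-layer network standing in for an already-trained model;
# the weights here are random placeholders just to make the sketch runnable.
d, width = 5, 200
W = rng.standard_normal((d, width))
v = rng.standard_normal(width) / np.sqrt(width)

def mc_dropout_predict(x, p_keep=0.9, n_samples=1000):
    """Monte Carlo dropout: keep dropout on at prediction time, average the
    stochastic forward passes, and read the spread as a rough uncertainty."""
    h = np.maximum(x @ W, 0.0)                              # hidden activations
    masks = rng.binomial(1, p_keep, (n_samples, width)) / p_keep
    preds = (masks * h) @ v                                 # one output per sampled mask
    return preds.mean(), preds.std()

x = rng.standard_normal(d)
print(mc_dropout_predict(x))
```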

As stochastic DEs

We can find an SDE representation for a given NN-style kernel if we can find Green’s functions \(G\) such that \(\sigma^2_\varepsilon \langle G_\cdot(\mathbf{x}_p), G_\cdot(\mathbf{x}_q)\rangle = \mathbb{E} \big[ \psi\big(Z_p\big) \psi\big(Z_q \big) \big].\) Russell Tsuchida observes that \(G_\mathbf{s}(\mathbf{x}_p) = \psi(\mathbf{s}^\top \mathbf{x}_p) \sqrt{\phi(\mathbf{s})}\), where \(\phi\) is the pdf of a standard multivariate normal vector, is one such solution.
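A quick numerical check of that observation (my sketch, taking \(\psi = \mathrm{ReLU}\) and \(\sigma^2_\varepsilon = 1\)): the inner product \(\langle G_\cdot(\mathbf{x}_p), G_\cdot(\mathbf{x}_q)\rangle = \int \psi(\mathbf{s}^\top \mathbf{x}_p)\psi(\mathbf{s}^\top \mathbf{x}_q)\phi(\mathbf{s})\,d\mathbf{s}\), estimated by Monte Carlo over \(\mathbf{s} \sim \mathcal{N}(0, I)\), matches the closed-form value of \(\mathbb{E}[\psi(Z_p)\psi(Z_q)]\) for the jointly Gaussian pre-activations.

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(t):
    return np.maximum(t, 0.0)

x_p = np.array([1.0, 0.5, -0.3])
x_q = np.array([0.2, -1.0, 0.7])

# Left side: <G_.(x_p), G_.(x_q)> = integral of psi(s @ x_p) psi(s @ x_q) phi(s) ds,
# estimated by Monte Carlo with s ~ N(0, I).
S = rng.standard_normal((1_000_000, 3))
lhs = np.mean(relu(S @ x_p) * relu(S @ x_q))

# Right side: E[psi(Z_p) psi(Z_q)] for the jointly Gaussian pre-activations
# Z = (s @ x_p, s @ x_q); for ReLU this has the arc-cosine closed form.
norm_p, norm_q = np.linalg.norm(x_p), np.linalg.norm(x_q)
theta = np.arccos(np.clip(x_p @ x_q / (norm_p * norm_q), -1.0, 1.0))
rhs = norm_p * norm_q * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

print(lhs, rhs)   # agree up to Monte Carlo error
```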

References

Adlam, Ben, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, and Jasper Snoek. 2020. “Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit.” arXiv:2010.07355 [Cs, Stat], October.
Arora, Sanjeev, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. 2019. “On Exact Computation with an Infinitely Wide Neural Net.” In Advances in Neural Information Processing Systems, 10.
Bai, Yu, and Jason D. Lee. 2020. “Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks.” arXiv:1910.01619 [Cs, Math, Stat], February.
Belkin, Mikhail, Siyuan Ma, and Soumik Mandal. 2018. “To Understand Deep Learning We Need to Understand Kernel Learning.” In International Conference on Machine Learning, 541–49.
Chen, Lin, and Sheng Xu. 2020. “Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS.” arXiv:2009.10683 [Cs, Math, Stat], October.
Chen, Minshuo, Yu Bai, Jason D. Lee, Tuo Zhao, Huan Wang, Caiming Xiong, and Richard Socher. 2021. “Towards Understanding Hierarchical Learning: Benefits of Neural Representations.” arXiv:2006.13436 [Cs, Stat], March.
Cho, Youngmin, and Lawrence K. Saul. 2009. “Kernel Methods for Deep Learning.” In Proceedings of the 22nd International Conference on Neural Information Processing Systems, 22:342–50. NIPS’09. Red Hook, NY, USA: Curran Associates Inc.
Domingos, Pedro. 2020. “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” arXiv:2012.00152 [Cs, Stat], November.
Fan, Zhou, and Zhichao Wang. 2020. “Spectra of the Conjugate Kernel and Neural Tangent Kernel for Linear-Width Neural Networks.” In Advances in Neural Information Processing Systems, 33:12.
Fort, Stanislav, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, and Surya Ganguli. 2020. “Deep Learning Versus Kernel Learning: An Empirical Study of Loss Landscape Geometry and the Time Evolution of the Neural Tangent Kernel.” In Advances in Neural Information Processing Systems. Vol. 33.
Gal, Yarin, and Zoubin Ghahramani. 2015. “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.” In Proceedings of the 33rd International Conference on Machine Learning (ICML-16).
———. 2016. “A Theoretically Grounded Application of Dropout in Recurrent Neural Networks.” In arXiv:1512.05287 [Stat].
Geifman, Amnon, Abhay Yadav, Yoni Kasten, Meirav Galun, David Jacobs, and Ronen Basri. 2020. “On the Similarity Between the Laplace and Neural Tangent Kernels.” In arXiv:2007.01580 [Cs, Stat].
Ghahramani, Zoubin. 2013. “Bayesian Non-Parametrics and the Probabilistic Approach to Modelling.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371 (1984): 20110553.
Girosi, Federico, Michael Jones, and Tomaso Poggio. 1995. “Regularization Theory and Neural Networks Architectures.” Neural Computation 7 (2): 219–69.
Giryes, R., G. Sapiro, and A. M. Bronstein. 2016. “Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?” IEEE Transactions on Signal Processing 64 (13): 3444–57.
He, Bobby, Balaji Lakshminarayanan, and Yee Whye Teh. 2020. “Bayesian Deep Ensembles via the Neural Tangent Kernel.” In Advances in Neural Information Processing Systems. Vol. 33.
Jacot, Arthur, Franck Gabriel, and Clement Hongler. 2018. “Neural Tangent Kernel: Convergence and Generalization in Neural Networks.” In Advances in Neural Information Processing Systems, 31:8571–80. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Karakida, Ryo, and Kazuki Osawa. 2020. “Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks.” Advances in Neural Information Processing Systems 33.
Kristiadi, Agustinus, Matthias Hein, and Philipp Hennig. 2021. “An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence.” Advances in Neural Information Processing Systems 34: 18789–800.
Lee, Jaehoon, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. 2018. “Deep Neural Networks as Gaussian Processes.” In ICLR.
Lee, Jaehoon, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. 2019. “Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent.” In Advances in Neural Information Processing Systems, 8570–81.
Matthews, Alexander Graeme de Garis, Mark Rowland, Jiri Hron, Richard E. Turner, and Zoubin Ghahramani. 2018. “Gaussian Process Behaviour in Wide Deep Neural Networks.” In arXiv:1804.11271 [Cs, Stat].
Meronen, Lassi, Christabella Irwanto, and Arno Solin. 2020. “Stationary Activations for Uncertainty Calibration in Deep Learning.” In Advances in Neural Information Processing Systems. Vol. 33.
Neal, Radford M. 1996a. “Bayesian Learning for Neural Networks.” Secaucus, NJ, USA: Springer-Verlag New York, Inc.
———. 1996b. “Priors for Infinite Networks.” In Bayesian Learning for Neural Networks, edited by Radford M. Neal, 29–53. Lecture Notes in Statistics. New York, NY: Springer.
Novak, Roman, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. 2019. “Neural Tangents: Fast and Easy Infinite Neural Networks in Python.” arXiv:1912.02803 [Cs, Stat], December.
Novak, Roman, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. 2020. “Bayesian Deep Convolutional Networks with Many Channels Are Gaussian Processes.” In The International Conference on Learning Representations.
Pearce, Tim, Russell Tsuchida, Mohamed Zaki, Alexandra Brintrup, and Andy Neely. 2019. “Expressive Priors in Bayesian Neural Networks: Kernel Combinations and Periodic Functions.” In Uncertainty in Artificial Intelligence, 11.
Sachdeva, Noveen, Mehak Preet Dhaliwal, Carole-Jean Wu, and Julian McAuley. 2022. “Infinite Recommendation Networks: A Data-Centric Approach.” arXiv.
Tancik, Matthew, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. 2020. “Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains.” arXiv:2006.10739 [Cs], June.
Williams, Christopher K. I. 1996. “Computing with Infinite Networks.” In Proceedings of the 9th International Conference on Neural Information Processing Systems, 295–301. NIPS’96. Cambridge, MA, USA: MIT Press.
Yang, Greg. 2019. “Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture Are Gaussian Processes.” arXiv:1910.12478 [Cond-Mat, Physics:math-Ph], December.
Yang, Greg, and Edward J. Hu. 2020. “Feature Learning in Infinite-Width Neural Networks.” arXiv:2011.14522 [Cond-Mat], November.
Zhang, Chiyuan, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. “Understanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR.
