Interpolation, extrapolation and memorisation in neural networks

January 25, 2022 — October 30, 2024

Tags: algebra, graphical models, how do science, machine learning, meta learning, networks, probability, statistics
Figure 1: Learn this convex hull.

Interpreting models in terms of their ability to interpolate and memorise, and when that becomes extrapolation. Connection to neural scaling, overparameterization, and possibly singular learning theory.

Balestriero, Pesenti, and LeCun (2021):

The notion of interpolation and extrapolation is fundamental in various fields from deep learning to function approximation. Interpolation occurs for a sample x whenever this sample falls inside or on the boundary of the given dataset’s convex hull. Extrapolation occurs when x falls outside of that convex hull. One fundamental (mis)conception is that state-of-the-art algorithms work so well because of their ability to correctly interpolate training data. A second (mis)conception is that interpolation happens throughout tasks and datasets, in fact, many intuitions and theories rely on that assumption. We empirically and theoretically argue against those two points and demonstrate that on any high-dimensional (>100) dataset, interpolation almost surely never happens. Those results challenge the validity of our current interpolation/extrapolation definition as an indicator of generalization performances.
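The convex-hull definition is easy to test numerically: whether \(x\) lies in the hull of \(x_1, \dots, x_n\) is a linear-programming feasibility problem. Here is a minimal sketch of that test (my own toy, not the paper's code; `in_hull` and all the constants are illustrative), which shows the fraction of fresh Gaussian samples landing inside the hull of \(n = 500\) training points collapsing as the dimension grows:

```python
# Convex-hull membership as LP feasibility: does some lambda >= 0 with
# sum(lambda) = 1 satisfy X^T lambda = x?
import numpy as np
from scipy.optimize import linprog

def in_hull(X, x):
    """True if x lies in the convex hull of the rows of X."""
    n, _ = X.shape
    A_eq = np.vstack([X.T, np.ones((1, n))])  # convex-combination constraints
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0                    # 0 = feasible solution found

rng = np.random.default_rng(0)
n, trials = 500, 200
for d in (2, 10, 50, 100):
    X = rng.standard_normal((n, d))
    hits = sum(in_hull(X, rng.standard_normal(d)) for _ in range(trials))
    print(f"d = {d:3d}: fraction of test points inside hull = {hits / trials:.2f}")
```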

Bubeck and Sellke (2021) argue:

Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in the current practice of deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is necessary if one wants to interpolate the data smoothly. Namely we show that smooth interpolation requires \(d\) times more parameters than mere interpolation, where \(d\) is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial size weights, and any covariate distribution verifying isoperimetry. In the case of two-layers neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.
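To get a feel for the claim, here is an informal probe (my construction, not the paper's experiment): a random-features model, i.e. a two-layer ReLU network with a frozen first layer, is made to interpolate \(n\) pure-noise labels by minimum-norm least squares, and the fitted function's Lipschitz constant is estimated from its largest input-gradient norm. As the parameter count \(p\) grows past \(n\), the estimate should shrink, loosely tracking the \(\sqrt{nd/p}\) scaling in the theorem:

```python
# Interpolate random labels with p random ReLU features (a two-layer net
# whose first layer is frozen), then estimate the Lipschitz constant of
# the fit. A sketch under my own assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 30
X = rng.standard_normal((n, d))              # covariates
y = rng.choice([-1.0, 1.0], size=n)          # pure-noise labels: must be memorised

for p in (400, 1600, 6400, 25600):
    W = rng.standard_normal((p, d)) / np.sqrt(d)   # frozen first layer
    Phi = np.maximum(X @ W.T, 0.0)                 # n x p ReLU features
    theta = np.linalg.pinv(Phi) @ y                # minimum-norm interpolant
    assert np.allclose(Phi @ theta, y, atol=1e-5)  # training error ~ 0

    # grad_x f(x) = W^T (theta * 1[Wx > 0]); take the max norm over probes.
    probes = rng.standard_normal((500, d))
    grads = (((probes @ W.T) > 0) * theta) @ W     # 500 x d gradient vectors
    lip = np.linalg.norm(grads, axis=1).max()
    print(f"p = {p:5d}: empirical Lipschitz ~ {lip:7.2f}, sqrt(n*d/p) = {np.sqrt(n * d / p):.2f}")
```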

There are other works in this domain (Le 2018; Ma, Bassily, and Belkin 2018; Zhang et al. 2017, 2021).

I am not sure whether this is distinct from other double descent phenomena; Hastie et al. (2020) suggest perhaps not:

Interpolators — estimators that achieve zero training error — have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type. In this paper, we study minimum \(\ell_2\) norm (“ridgeless”) interpolation in high-dimensional least squares regression. We consider two different models for the feature distribution: a linear model, where the feature vectors \(x_i \in {\mathbb R}^p\) are obtained by applying a linear transform to a vector of i.i.d. entries, \(x_i = \Sigma^{1/2} z_i\) (with \(z_i \in {\mathbb R}^p\)); and a nonlinear model, where the feature vectors are obtained by passing the input through a random one-layer neural network, \(x_i = \varphi(W z_i)\) (with \(z_i \in {\mathbb R}^d\), \(W \in {\mathbb R}^{p \times d}\) a matrix of i.i.d. entries, and \(\varphi\) an activation function acting componentwise on \(W z_i\)). We recover — in a precise quantitative way — several phenomena that have been observed in large-scale neural networks and kernel machines, including the “double descent” behaviour of the prediction risk, and the potential benefits of overparametrization.
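The minimum-norm setup is simple enough to reproduce the double descent curve in a few lines. In this sketch (my construction, in the spirit of their misspecified linear model rather than their exact setup), data come from a dense linear model in \(d\) dimensions but we regress on only the first \(p\) coordinates; the test risk spikes at the interpolation threshold \(p = n\) and then descends again:

```python
# Double descent for minimum-norm ("ridgeless") least squares: vary the
# number of used features p past the interpolation threshold p = n.
import numpy as np

rng = np.random.default_rng(2)
n, d, sigma = 100, 400, 0.5
beta = rng.standard_normal(d) / np.sqrt(d)    # true coefficients, ||beta|| ~ 1

def test_risk(p, n_test=2000):
    Xtr = rng.standard_normal((n, d))
    ytr = Xtr @ beta + sigma * rng.standard_normal(n)
    bhat = np.linalg.pinv(Xtr[:, :p]) @ ytr   # min-norm LS on first p features
    Xte = rng.standard_normal((n_test, d))
    yte = Xte @ beta + sigma * rng.standard_normal(n_test)
    return np.mean((Xte[:, :p] @ bhat - yte) ** 2)

for p in (10, 50, 90, 99, 100, 101, 110, 150, 200, 400):
    r = np.mean([test_risk(p) for _ in range(20)])   # average over 20 draws
    print(f"p = {p:3d}: test risk = {r:9.3f}")
```

The spike at \(p = n\) comes from the near-singular design matrix; a touch of ridge regularisation, or more overparametrisation, tames it.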

References

Alabdulmohsin, Neyshabur, and Zhai. 2022. “Revisiting Neural Scaling Laws in Language and Vision.” Advances in Neural Information Processing Systems.
Balestriero, Pesenti, and LeCun. 2021. “Learning in High Dimension Always Amounts to Extrapolation.”
Bartlett, Montanari, and Rakhlin. 2021. “Deep Learning: A Statistical Viewpoint.” Acta Numerica.
Belkin. 2021. “Fit Without Fear: Remarkable Mathematical Phenomena of Deep Learning Through the Prism of Interpolation.” Acta Numerica.
Belkin, Hsu, Ma, et al. 2019. “Reconciling Modern Machine-Learning Practice and the Classical Bias–Variance Trade-Off.” Proceedings of the National Academy of Sciences.
Bubeck, and Sellke. 2021. “A Universal Law of Robustness via Isoperimetry.” In Advances in Neural Information Processing Systems.
Domingos. 2020. “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” arXiv:2012.00152 [Cs, Stat].
Feldman. 2020. “Does Learning Require Memorization? A Short Tale about a Long Tail.” In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing. STOC 2020.
Grosse, Bae, Anil, et al. 2023. “Studying Large Language Model Generalization with Influence Functions.”
Hasson, Nastase, and Goldstein. 2020. “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron.
Hastie, Montanari, Rosset, et al. 2020. “Surprises in High-Dimensional Ridgeless Least Squares Interpolation.”
Hoel. 2021. “The Overfitted Brain: Dreams Evolved to Assist Generalization.” Patterns.
Le. 2018. “A Bayesian Perspective on Generalization and Stochastic Gradient Descent.” In Proceedings of ICLR.
Loog, Viering, Mey, et al. 2020. “A Brief Prehistory of Double Descent.” Proceedings of the National Academy of Sciences.
Ma, Bassily, and Belkin. 2018. “The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-Parametrized Learning.” In Proceedings of the 35th International Conference on Machine Learning.
Nakkiran, Kaplun, Bansal, et al. 2019. “Deep Double Descent: Where Bigger Models and More Data Hurt.” arXiv:1912.02292 [Cs, Stat].
Papyan, Han, and Donoho. 2020. “Prevalence of Neural Collapse During the Terminal Phase of Deep Learning Training.” Proceedings of the National Academy of Sciences.
Power, Burda, Edwards, et al. 2022. “Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets.”
Refinetti, d’Ascoli, Ohana, et al. 2021. “Align, Then Memorise: The Dynamics of Learning with Feedback Alignment.” arXiv:2011.12428 [Cond-Mat, Stat].
Webb, Dulberg, Frankland, et al. 2020. “Learning Representations That Support Extrapolation.” In Proceedings of the 37th International Conference on Machine Learning.
Wilson, and Izmailov. 2020. “Bayesian Deep Learning and a Probabilistic Perspective of Generalization.”
Xu, Zhang, Li, et al. 2021. “How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks.”
Zhan, Xie, Mao, et al. 2022. “Evaluating Interpolation and Extrapolation Performance of Neural Retrieval Models.” In Proceedings of the 31st ACM International Conference on Information & Knowledge Management. CIKM ’22.
Zhang, Bengio, Hardt, et al. 2017. “Understanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR.
———, et al. 2021. “Understanding Deep Learning (Still) Requires Rethinking Generalization.” Communications of the ACM.