Interpolation, extrapolation and memorisation in neural networks

January 25, 2022 — November 22, 2024

Tags: algebra, graphical models, how do science, machine learning, meta learning, networks, probability, statistics.

Figure 1: Learn this convex hull.

Notes on interpreting models in terms of their ability to interpolate and memorise training data, and on when that shades into extrapolation to new situations.

Put another way: can we train models to be smarter than any of the examples they were trained on? If so, when?

Connections to neural scaling, overparameterization, and possibly singular learning theory.

Keywords: Grokking, transcendence, and plain old generalization.

Balestriero, Pesenti, and LeCun (2021):

The notion of interpolation and extrapolation is fundamental in various fields from deep learning to function approximation. Interpolation occurs for a sample x whenever this sample falls inside or on the boundary of the given dataset’s convex hull. Extrapolation occurs when x falls outside of that convex hull. One fundamental (mis)conception is that state-of-the-art algorithms work so well because of their ability to correctly interpolate training data. A second (mis)conception is that interpolation happens throughout tasks and datasets, in fact, many intuitions and theories rely on that assumption. We empirically and theoretically argue against those two points and demonstrate that on any high-dimensional (>100) dataset, interpolation almost surely never happens. Those results challenge the validity of our current interpolation/extrapolation definition as an indicator of generalization performances.
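
The convex-hull claim is cheap to check numerically. Here is a minimal sketch (my own illustration, not their code): membership of \(x\) in the convex hull of \(x_1,\dots,x_n\) is a linear-programming feasibility problem, find \(\lambda \ge 0\) with \(\sum_i \lambda_i = 1\) and \(\sum_i \lambda_i x_i = x\), which scipy.optimize.linprog can decide.

```python
# Sketch: how often does a fresh Gaussian sample land inside the convex
# hull of n training samples, as the ambient dimension d grows?
import numpy as np
from scipy.optimize import linprog

def in_hull(X, x):
    """True iff x is a convex combination of the rows of X."""
    n = X.shape[0]
    A_eq = np.vstack([X.T, np.ones((1, n))])  # d+1 equality constraints
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success  # feasible <=> x interpolates the training set

rng = np.random.default_rng(0)
n_train, n_test = 1000, 50
for d in (2, 10, 100):
    X = rng.standard_normal((n_train, d))
    hits = sum(in_hull(X, rng.standard_normal(d)) for _ in range(n_test))
    print(f"d={d:3d}: {hits}/{n_test} fresh samples fall inside the hull")
# At d=2 nearly every test point lands inside; at d=100 essentially none.
```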

Bubeck and Sellke (2021) argue:

Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in the current practice of deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is necessary if one wants to interpolate the data smoothly. Namely we show that smooth interpolation requires \(d\) times more parameters than mere interpolation, where \(d\) is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial size weights, and any covariate distribution verifying isoperimetry. In the case of two-layers neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.
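
The quantitative version, paraphrasing their main theorem loosely: with high probability over \(n\) noisy samples in ambient dimension \(d\), any model \(f\) from a smoothly parametrized \(p\)-parameter class that fits the data below the noise floor must have Lipschitz constant

\[
\operatorname{Lip}(f) \;\gtrsim\; \sqrt{\frac{nd}{p}},
\]

so an \(O(1)\)-Lipschitz (robust) interpolator needs \(p \gtrsim nd\) parameters, a factor of \(d\) more than the \(p \approx n\) that suffices for bare interpolation.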

There are other works in this domain (Le 2018; Ma, Bassily, and Belkin 2018; C. Zhang et al. 2017, 2021).

I am not sure whether this is distinct from other double descent phenomena. Hastie et al. (2020) suggest perhaps not:

Interpolators — estimators that achieve zero training error — have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type. In this paper, we study minimum \(\ell_2\) norm (“ridgeless”) interpolation in high-dimensional least squares regression. We consider two different models for the feature distribution: a linear model, where the feature vectors \(x_i \in {\mathbb R}^p\) are obtained by applying a linear transform to a vector of i.i.d. entries, \(x_i = \Sigma^{1/2} z_i\) (with \(z_i \in {\mathbb R}^p\)); and a nonlinear model, where the feature vectors are obtained by passing the input through a random one-layer neural network, \(x_i = \varphi(W z_i)\) (with \(z_i \in {\mathbb R}^d\), \(W \in {\mathbb R}^{p \times d}\) a matrix of i.i.d. entries, and \(\varphi\) an activation function acting componentwise on \(W z_i\)). We recover — in a precise quantitative way — several phenomena that have been observed in large-scale neural networks and kernel machines, including the “double descent” behaviour of the prediction risk, and the potential benefits of overparametrization.
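
Their minimum-\(\ell_2\)-norm interpolator is easy to play with: \(\hat\beta = X^{+} y\), with \(X^{+}\) the Moore–Penrose pseudoinverse. A toy sketch (my own setup, not theirs) recovers the double-descent shape by sweeping the number of features \(p\) past the sample size \(n\):

```python
# Sketch: test risk of the min-norm ("ridgeless") least-squares fit as a
# function of overparameterization. Invented toy data.
import numpy as np

rng = np.random.default_rng(1)
n, d_max = 100, 400

# Dense linear ground truth over d_max features; a model that sees only
# the first p < d_max features is misspecified.
beta = rng.standard_normal(d_max)
beta /= np.linalg.norm(beta)

Z = rng.standard_normal((n, d_max))
y = Z @ beta + rng.standard_normal(n)
Z_test = rng.standard_normal((5000, d_max))
y_test = Z_test @ beta + rng.standard_normal(5000)

for p in (20, 50, 90, 100, 110, 200, 400):
    # pinv gives OLS for p < n and the min-norm interpolator for p >= n.
    beta_hat = np.linalg.pinv(Z[:, :p]) @ y
    mse = np.mean((Z_test[:, :p] @ beta_hat - y_test) ** 2)
    print(f"p={p:3d} (p/n={p/n:.1f}): test MSE {mse:8.2f}")
# Risk spikes near the interpolation threshold p = n and descends again
# as p grows past it: double descent.
```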

E. Zhang et al. (2024):

Generative models are trained with the simple objective of imitating the conditional probability distribution induced by the data they are trained on. Therefore, when trained on data generated by humans, we may not expect the artificial model to outperform the humans on their original objectives. In this work, we study the phenomenon of transcendence: when a generative model achieves capabilities that surpass the abilities of the experts generating its data. We demonstrate transcendence by training an autoregressive transformer to play chess from game transcripts, and show that the trained model can sometimes achieve better performance than all players in the dataset. We theoretically prove that transcendence can be enabled by low-temperature sampling, and rigorously assess this claim experimentally. Finally, we discuss other sources of transcendence, laying the groundwork for future investigation of this phenomenon in a broader setting.

Their case study is, yes, chess.
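
The low-temperature mechanism is easy to see in a toy model. A sketch with invented numbers (mine, not theirs): suppose every expert plays the objectively best move only 40% of the time and dumps the rest of its probability mass on an idiosyncratic blunder. Each individual expert's modal move is then wrong, but the mixture the imitator learns has its mode on the best move, and low-temperature sampling sharpens that mixture into near-perfect play.

```python
# Toy sketch of the mechanism in E. Zhang et al. (2024). Move 0 is the
# objectively best move.
import numpy as np

K, n_moves, q = 5, 6, 0.4

# Each expert plays the best move with probability q = 0.4 and puts the
# remaining 0.6 on its own pet blunder, so every expert's mode is wrong.
experts = []
for i in range(K):
    p = np.zeros(n_moves)
    p[0] = q
    p[1 + i] = 1 - q
    experts.append(p)

# A perfect imitator learns the pooled conditional distribution, whose
# mode is now the best move (0.4 vs 0.12 for each blunder).
mixture = np.mean(experts, axis=0)
print("mixture:", np.round(mixture, 2))

def sharpen(p, T):
    """Temperature-T sampling distribution: p^(1/T), renormalised."""
    w = p ** (1.0 / T)
    return w / w.sum()

for T in (1.0, 0.5, 0.1):
    print(f"T={T}: P(best move) = {sharpen(mixture, T)[0]:.3f}")
# T=1 matches the average expert (0.4); T=0.1 plays the best move almost
# always: transcendence via an implicit majority vote over experts.
```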

References

Alabdulmohsin, Neyshabur, and Zhai. 2022. “Revisiting Neural Scaling Laws in Language and Vision.” Advances in Neural Information Processing Systems.
Balestriero, Pesenti, and LeCun. 2021. “Learning in High Dimension Always Amounts to Extrapolation.”
Bartlett, Montanari, and Rakhlin. 2021. “Deep Learning: A Statistical Viewpoint.” Acta Numerica.
Belkin. 2021. “Fit Without Fear: Remarkable Mathematical Phenomena of Deep Learning Through the Prism of Interpolation.” Acta Numerica.
Belkin, Hsu, Ma, et al. 2019. “Reconciling Modern Machine-Learning Practice and the Classical Bias–Variance Trade-Off.” Proceedings of the National Academy of Sciences.
Bubeck, and Sellke. 2021. “A Universal Law of Robustness via Isoperimetry.” In Advances in Neural Information Processing Systems.
Domingos. 2020. “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” arXiv:2012.00152 [cs, stat].
Feldman. 2020. “Does Learning Require Memorization? A Short Tale about a Long Tail.” In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing. STOC 2020.
Grosse, Bae, Anil, et al. 2023. “Studying Large Language Model Generalization with Influence Functions.”
Hasson, Nastase, and Goldstein. 2020. “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron.
Hastie, Montanari, Rosset, et al. 2020. “Surprises in High-Dimensional Ridgeless Least Squares Interpolation.”
Hoel. 2021. “The Overfitted Brain: Dreams Evolved to Assist Generalization.” Patterns.
Le. 2018. “A Bayesian Perspective on Generalization and Stochastic Gradient Descent.” In Proceedings of ICLR.
Loog, Viering, Mey, et al. 2020. “A Brief Prehistory of Double Descent.” Proceedings of the National Academy of Sciences.
Ma, Bassily, and Belkin. 2018. “The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-Parametrized Learning.” In Proceedings of the 35th International Conference on Machine Learning.
Misra, and Mahowald. 2024. “Language Models Learn Rare Phenomena from Less Rare Phenomena: The Case of the Missing AANNs.”
Nakkiran, Kaplun, Bansal, et al. 2019. “Deep Double Descent: Where Bigger Models and More Data Hurt.” arXiv:1912.02292 [cs, stat].
Papyan, Han, and Donoho. 2020. “Prevalence of Neural Collapse During the Terminal Phase of Deep Learning Training.” Proceedings of the National Academy of Sciences.
Power, Burda, Edwards, et al. 2022. “Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets.”
Refinetti, d’Ascoli, Ohana, et al. 2021. “Align, Then Memorise: The Dynamics of Learning with Feedback Alignment.” arXiv:2011.12428 [cond-mat, stat].
Webb, Dulberg, Frankland, et al. 2020. “Learning Representations That Support Extrapolation.” In Proceedings of the 37th International Conference on Machine Learning.
Wilson, and Izmailov. 2020. “Bayesian Deep Learning and a Probabilistic Perspective of Generalization.”
Xu, Zhang, Li, et al. 2021. “How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks.”
Zhan, Xie, Mao, et al. 2022. “Evaluating Interpolation and Extrapolation Performance of Neural Retrieval Models.” In Proceedings of the 31st ACM International Conference on Information & Knowledge Management. CIKM ’22.
Zhang, Chiyuan, Bengio, Hardt, et al. 2017. “Understanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR.
Zhang, Chiyuan, Bengio, Hardt, et al. 2021. “Understanding Deep Learning (Still) Requires Rethinking Generalization.” Communications of the ACM.
Zhang, Edwin, Zhu, Saphra, et al. 2024. “Transcendence: Generative Models Can Outperform The Experts That Train Them.”