Notes on the general technique of increasing the number of slack parameters you have, especially in machine learning, especially especially in neural nets.
This insight is fresh. Bubeck and Sellke (2021) argue:
Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in the current practice of deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is necessary if one wants to interpolate the data smoothly. Namely we show that smooth interpolation requires \(d\) times more parameters than mere interpolation, where \(d\) is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial size weights, and any covariate distribution verifying isoperimetry. In the case of two-layers neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.
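Their headline bound, loosely restated in my own notation (see the paper for the precise smoothness and isoperimetry conditions): with \(n\) noisy samples in ambient dimension \(d\) and a smoothly parametrized class with \(p\) polynomially bounded parameters, any \(f\) in the class that fits the data below the noise level must have
\[
\operatorname{Lip}(f) \gtrsim \sqrt{\frac{nd}{p}},
\]
so keeping the Lipschitz constant \(O(1)\) forces \(p \gtrsim nd\), whereas mere interpolation only needs \(p \approx n\).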
In the wide-network limit
See Wide NNs.
For making optimisation nice
Zeyuan Allen-Zhu, Yuanzhi Li and Zhao Song argue that the combination of overparameterization and SGD is the secret to how deep learning works. Certainly, given how some classic optimisation problems can be lifted into convex ones by adding extra variables, we can imagine by analogy that something similar happens here.
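A toy illustration of that story (my own sketch, not the construction in their paper): plain gradient descent on a heavily overparameterized random-feature regression, with far more parameters than data points, drives the training loss to (numerically) zero. Training only the linear head makes the lifted problem literally convex; the wide-network limit mentioned below says something like this holds approximately even when all weights move.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression problem: n data points in d dimensions, p >> n parameters.
n, d, p = 20, 10, 500
X = rng.normal(size=(n, d))
y = rng.normal(size=n)                 # arbitrary targets; we only care about interpolation

# A fixed random feature map lifts each x into p dimensions; only the linear head is trained.
W = rng.normal(size=(d, p)) / np.sqrt(d)
features = np.tanh(X @ W)              # shape (n, p)

# Step size from the smoothness constant of the quadratic loss (largest singular value squared).
lr = n / np.linalg.norm(features, ord=2) ** 2

theta = np.zeros(p)                    # overparameterized linear head
for step in range(20_000):
    resid = features @ theta - y       # residuals, shape (n,)
    grad = features.T @ resid / n      # gradient of 0.5 * mean squared error
    theta -= lr * grad

# With p >> n the n-by-n Gram matrix of the features is (generically) full rank,
# so gradient descent interpolates: the training error shrinks toward zero.
print("final training MSE:", np.mean((features @ theta - y) ** 2))
```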
RJ Lipton discusses Arno van den Essen’s incidental work on stabilisation methods of polynomials, which relates, AFAICT, to transfer-function-type stability. Does this connect to the overparameterization in the rational transfer function analysis of Hardt, Ma, and Recht (2018)? 🏗.
When adding data (or parameters?) can make the model worse: see e.g. Deep Double Descent.
Lottery ticket hypothesis
The Lottery Ticket hypothesis (Frankle and Carbin 2019; Hayou et al. 2020) asserts something like “there is a good compact network hidden inside the overparameterized one you have”. Intuitively, it is computationally hard to find that hidden optimal network. I am interested in computational bounds for this: how much cheaper is it to compute with a massive network than to find the tiny network that does better? The calculus here is altered by SIMD architectures such as GPUs, which change the relative cost (although not the scaling) of certain types of calculation; that shift is arguably how we got to the modern form of neural net obsession.
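A minimal sketch of the lottery-ticket recipe as I read it, here as one-shot per-layer magnitude pruning with rewinding, on a toy NumPy network (the task, sizes and pruning level are illustrative, not Frankle and Carbin’s actual iterative setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task.
n, d, h = 200, 10, 64
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

def init_params():
    """Random initialization for a one-hidden-layer tanh network."""
    return {"W1": rng.normal(size=(d, h)) / np.sqrt(d),
            "W2": rng.normal(size=(h,)) / np.sqrt(h)}

def train(params, masks, steps=5000, lr=0.1):
    """Full-batch gradient descent; masked-out (pruned) weights stay at zero."""
    W1 = params["W1"] * masks["W1"]
    W2 = params["W2"] * masks["W2"]
    for _ in range(steps):
        a = np.tanh(X @ W1)                          # hidden activations, (n, h)
        resid = a @ W2 - y                           # residuals, (n,)
        gW2 = a.T @ resid / n                        # gradient of 0.5 * mean squared error
        gW1 = X.T @ ((resid[:, None] * W2) * (1 - a ** 2)) / n
        W1 -= lr * gW1 * masks["W1"]                 # masked gradient step
        W2 -= lr * gW2 * masks["W2"]
    mse = np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)
    return {"W1": W1, "W2": W2}, mse

theta0 = init_params()
dense_masks = {k: np.ones_like(v) for k, v in theta0.items()}

# 1. Train the dense, overparameterized network.
dense_trained, dense_mse = train(theta0, dense_masks)

# 2. In each layer, keep only the 20% largest-magnitude trained weights.
masks = {k: (np.abs(v) >= np.quantile(np.abs(v), 0.8)).astype(float)
         for k, v in dense_trained.items()}

# 3. Rewind the surviving weights to their original initialization and retrain the sparse "ticket".
_, ticket_mse = train(theta0, masks)

# Often (not always) the sparse rewound ticket trains to a loss close to the dense net's.
print(f"dense MSE: {dense_mse:.4f}   80%-pruned ticket MSE: {ticket_mse:.4f}")
```

Note the economics the sketch makes visible: finding the ticket still required training the dense network first, which is the computational-bound question above.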