Notes on the general weird behaviour of increasing the number of slack parameters we use, especially in machine learning, especially especially in neural nets. Most of these have far more parameters than we “need”, which is a problem for classical models of learning, wherein we learn to fear having too many parameters.
For making optimisation nice
Looking at how some classic non-convex optimisation problems can be lifted into convex ones by adding slack variables, can we imagine that something similar happens in neural nets? Perhaps not lifting them into convex problems _per se_, but at least into better-behaved optimisations in some sense.
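A minimal sketch of the kind of lifting I mean, using the textbook Shor relaxation of a quadratically constrained quadratic program (nothing neural-network-specific): the non-convex problem
\[
\min_{x \in \mathbb{R}^n} x^\top A x \quad \text{s.t. } x^\top B_i x \le b_i,\quad i = 1, \dots, m,
\]
is rewritten exactly by introducing the lifted variable \(X = x x^\top\),
\[
\min_{X \succeq 0} \operatorname{tr}(A X) \quad \text{s.t. } \operatorname{tr}(B_i X) \le b_i,\ \operatorname{rank}(X) = 1,
\]
and dropping the rank constraint leaves a convex semidefinite program. The extra parameters in \(X\) are what buy the convexity; the hope is that overparameterized nets enjoy some loose analogue of this.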
The combination of overparameterization and SGD is argued by e.g. AllenZhuConvergence2018 to be the secret to how deep learning works.
RJ Lipton discusses Arno van den Essen’s incidental work on stabilisation methods for polynomials, which relates, AFAICT, to transfer-function-type stability. Does this connect to the overparameterization in the rational transfer function analysis of Hardt, Ma, and Recht (2018)?
Double descent
When adding data (or parameters?) can make the model worse. E.g. Deep Double Descent.
Possibly this phenomenon relates to the concept of…
Data interpolation
a.k.a. benign overfitting. Bubeck and Sellke (2021) argue:
Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in the current practice of deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is necessary if one wants to interpolate the data smoothly. Namely we show that smooth interpolation requires \(d\) times more parameters than mere interpolation, where \(d\) is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial size weights, and any covariate distribution verifying isoperimetry. In the case of two-layers neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.
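To spell out the headline claim in symbols (my paraphrase of the abstract, in my own notation): if \(n\) data points in ambient dimension \(d\) are fitted by a smoothly parametrized class with \(p\) polynomially bounded parameters, then, under isoperimetry, any model that interpolates below the noise level has Lipschitz constant roughly at least
\[
\operatorname{Lip}(f) \gtrsim \sqrt{\frac{nd}{p}},
\]
so an \(O(1)\)-Lipschitz (i.e. smooth) interpolant needs \(p \gtrsim nd\) parameters, a factor of \(d\) more than the \(p \approx n\) that suffices for mere interpolation.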
I am not sure whether this is a distinct thing from other double descent phenomena. Hastie et al. (2020) suggest perhaps not:
Interpolators – estimators that achieve zero training error – have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type. In this paper, we study minimum \(\ell_2\) norm (“ridgeless”) interpolation in high-dimensional least squares regression. We consider two different models for the feature distribution: a linear model, where the feature vectors \(x_i \in {\mathbb R}^p\) are obtained by applying a linear transform to a vector of i.i.d. entries, \(x_i = \Sigma^{1/2} z_i\) (with \(z_i \in {\mathbb R}^p\)); and a nonlinear model, where the feature vectors are obtained by passing the input through a random one-layer neural network, \(x_i = \varphi(W z_i)\) (with \(z_i \in {\mathbb R}^d\), \(W \in {\mathbb R}^{p \times d}\) a matrix of i.i.d. entries, and \(\varphi\) an activation function acting componentwise on \(W z_i\)). We recover – in a precise quantitative way – several phenomena that have been observed in large-scale neural networks and kernel machines, including the “double descent” behavior of the prediction risk, and the potential benefits of overparametrization.
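Here is a tiny simulation in the spirit of that setup (my own toy choices of data, features and dimensions, not theirs): minimum-\(\ell_2\)-norm least squares on random nonlinear features, sweeping the number of features \(p\) through the interpolation threshold \(p = n\). The test risk typically blows up near \(p \approx n\) and then descends again as \(p\) grows.

```python
# A toy double-descent simulation: minimum-l2-norm ("ridgeless") least squares
# on random tanh features. All dimensions and the target function are my own
# arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d, n_test = 100, 20, 2000


def make_data(m):
    z = rng.normal(size=(m, d))
    y = np.sin(z @ np.ones(d) / np.sqrt(d)) + 0.1 * rng.normal(size=m)
    return z, y


z_train, y_train = make_data(n)
z_test, y_test = make_data(n_test)

for p in [10, 50, 90, 100, 110, 200, 500, 2000]:
    W = rng.normal(size=(d, p)) / np.sqrt(d)   # random, fixed first layer
    x_train = np.tanh(z_train @ W)             # random nonlinear features
    x_test = np.tanh(z_test @ W)
    # Minimum-norm least-squares fit; interpolates exactly once p >= n.
    beta = np.linalg.pinv(x_train) @ y_train
    risk = np.mean((x_test @ beta - y_test) ** 2)
    print(f"p = {p:5d}  test MSE = {risk:.3f}")
```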
Lottery ticket hypothesis
The Lottery Ticket hypothesis (Frankle and Carbin 2019; Hayou et al. 2020) asserts something like “there is a good compact network hidden inside the overparameterized one you have”. Intuitively, it is computationally hard to find that hidden optimal network. I am interested in computational bounds for this: how much cheaper is it to calculate with a massive network than to find the tiny network that does better?
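Since I keep forgetting what the recipe actually is, here is a minimal numpy sketch of iterative magnitude pruning with rewinding, as I understand it (my own toy network and hyperparameters, not Frankle and Carbin's code): train, prune the smallest surviving weights, rewind the rest to their initial values, retrain under the mask.

```python
# Toy lottery-ticket-style iterative magnitude pruning on a tiny two-layer net.
import numpy as np

rng = np.random.default_rng(1)
n, d, h = 200, 10, 256                       # data size, input dim, hidden width
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))           # toy regression target


def init():
    return {"W1": rng.normal(size=(d, h)) * 0.3, "W2": rng.normal(size=(h, 1)) * 0.3}


def train(params, masks, steps=2000, lr=0.05):
    W1, W2 = params["W1"] * masks["W1"], params["W2"] * masks["W2"]
    for _ in range(steps):
        H = np.tanh(X @ W1)                  # forward pass
        err = H @ W2 - y[:, None]
        gW2 = H.T @ err / n                  # backprop through both layers
        gW1 = X.T @ ((err @ W2.T) * (1 - H**2)) / n
        W1 -= lr * gW1 * masks["W1"]         # masked gradient step (full batch)
        W2 -= lr * gW2 * masks["W2"]
    mse = float(np.mean((np.tanh(X @ W1) @ W2 - y[:, None]) ** 2))
    return {"W1": W1, "W2": W2}, mse


init_params = init()
masks = {k: np.ones_like(v) for k, v in init_params.items()}
trained, mse = train(init_params, masks)
print(f"dense net: kept 100.0%  train MSE {mse:.4f}")

# Each round: drop the smallest 50% of surviving weights, rewind the rest to
# their initial values, retrain under the mask.
for round_ in range(3):
    for k in masks:
        surviving = np.abs(trained[k][masks[k] == 1])
        thresh = np.quantile(surviving, 0.5)
        masks[k] = masks[k] * (np.abs(trained[k]) >= thresh)
    trained, mse = train(init_params, masks)
    kept = 100 * np.mean([m.mean() for m in masks.values()])
    print(f"round {round_}: kept {kept:5.1f}%  train MSE {mse:.4f}")
```

This only illustrates the procedure; it says nothing about the interesting question above, namely how the cost of the prune-rewind-retrain loop compares with simply running the dense network.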
In extremely large models
In the wide-network limit
See Wide NNs.
Convex relaxation
See convex relaxation.