Overparameterization in large models

Improper learning, benign overfitting, double descent

April 4, 2018 — October 29, 2024

decision theory
feature construction
machine learning
model selection
neural nets
optimization
probabilistic algorithms
SDEs
statmech
stochastic processes

Notes on the generally weird behaviour of increasing the number of slack parameters we use, especially in machine learning, especially in neural nets. Most such models have far more parameters than we “need,” which is a problem for classical models of learning. Herein we work out how much to fear having too many parameters.


1 For making optimisation nice

Some classic non-convex optimization problems can be lifted into convex problems by adding slack variables. Can we imagine that something similar happens in neural nets, perhaps not lifting them into convex problems per se, but at least into better-behaved optimisations in some sense?
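
As a toy illustration of that kind of lifting (my own example, not taken from the papers cited below): phase retrieval asks us to recover $x \in \mathbb{R}^d$ from quadratic measurements $b_i = (a_i^\top x)^2$, which is non-convex in $x$. Introducing the slack matrix variable $X = x x^\top$ makes the measurements linear in $X$, and relaxing the rank-one constraint to a trace penalty gives a convex semidefinite program:

$$
b_i = (a_i^\top x)^2 = \langle a_i a_i^\top,\, x x^\top \rangle
\quad\Longrightarrow\quad
\min_{X \succeq 0} \operatorname{tr}(X)
\ \text{ s.t. }\ \langle a_i a_i^\top, X \rangle = b_i,\ i = 1, \dots, m.
$$

The price is a much bigger variable ($d^2$ entries instead of $d$). The loose analogy for neural nets is that surplus parameters might buy a similarly better-behaved landscape without delivering literal convexity.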

The combination of overparameterization and SGD is argued to be the secret of how deep learning works by, e.g., Allen-Zhu, Li, and Song (2018).

RJ Lipton discusses Arno van den Essen’s incidental work on the stabilisation of polynomials, which relates, AFAICT, to transfer-function-type stability. Does this connect to the overparameterization in the rational-transfer-function analysis of Hardt, Ma, and Recht (2018)? 🏗.

2 Double descent

When adding data (or parameters?) can make the model worse. See, e.g., Deep Double Descent (Nakkiran et al. 2019).

Possibly this phenomenon relates to the concept of data interpolation, although see “Resolution of misconception of overfitting: Differentiating learning curves from Occam curves.”
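
A minimal numerical sketch of the phenomenon, in the spirit of ridgeless-interpolation analyses such as Hastie et al. (2020) but not reproducing any particular experiment; the random-feature model, noise level, and feature counts are all illustrative assumptions:

```python
# Double descent in minimum-norm ("ridgeless") random-feature regression.
# Illustrative sketch only; all sizes and noise levels are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 2000, 20

w_true = rng.normal(size=d)          # ground-truth linear signal

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.5 * rng.normal(size=n)
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

def features(X, W):
    return np.tanh(X @ W)            # fixed random nonlinear features

for p in [10, 50, 90, 100, 110, 200, 500, 2000]:    # sweep the feature count
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    Phi_tr, Phi_te = features(X_train, W), features(X_test, W)
    beta = np.linalg.pinv(Phi_tr) @ y_train          # minimum-norm least squares
    print(f"p={p:5d}  train MSE={np.mean((Phi_tr @ beta - y_train)**2):.3f}  "
          f"test MSE={np.mean((Phi_te @ beta - y_test)**2):.3f}")
```

Test error typically spikes near the interpolation threshold $p \approx n_{\text{train}}$, where the feature matrix is nearly square and badly conditioned, then falls again as $p$ grows; that second fall is the “second descent.”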

3 Data interpolation

a.k.a. benign overfitting. See interpolation/extrapolation in NNs.

4 Lottery ticket hypothesis


The Lottery Ticket Hypothesis (Frankle and Carbin 2019; Hayou et al. 2020) asserts something like “there is a good compact network hidden inside the overparameterized one you have.” Intuitively, it is computationally hard to find that hidden optimal network. I am interested in computational bounds for this: how much cheaper is it to calculate with a massive network than to find the tiny networks that do better?
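
A minimal sketch of the prune-and-rewind recipe of Frankle and Carbin (2019) on a toy problem; the architecture, data, sparsity level, and training schedule are placeholder assumptions, and for brevity it prunes biases alongside weights, which the original recipe does not:

```python
# One round of iterative magnitude pruning with weight rewinding,
# in the spirit of Frankle & Carbin (2019). Toy model and data only.
import torch
import torch.nn as nn

def train(model, data, masks=None, steps=200, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in data:
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
            if masks is not None:                 # keep pruned weights at zero
                with torch.no_grad():
                    for p, m in zip(model.parameters(), masks):
                        p.mul_(m)

def magnitude_masks(model, sparsity=0.8):
    # Keep the largest-magnitude parameters, prune the rest.
    flat = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(flat, sparsity)
    return [(p.detach().abs() > threshold).float() for p in model.parameters()]

# Toy data and an over-wide model, purely for illustration.
x = torch.randn(512, 20)
y = (x[:, 0] > 0).long()
data = [(x, y)]
model = nn.Sequential(nn.Linear(20, 512), nn.ReLU(), nn.Linear(512, 2))

init_state = {k: v.clone() for k, v in model.state_dict().items()}
train(model, data)                                # 1. train the dense network
masks = magnitude_masks(model, sparsity=0.8)      # 2. prune small-magnitude weights
model.load_state_dict(init_state)                 # 3. rewind survivors to their init
with torch.no_grad():
    for p, m in zip(model.parameters(), masks):
        p.mul_(m)                                 #    and zero out the pruned ones
train(model, data, masks=masks)                   # 4. retrain the sparse "ticket"
```

Repeating steps 2–4 while gradually raising the sparsity gives the iterative version; the question above is whether any such search can be made cheap enough to beat just keeping the big network.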

5 In extremely large models

See neural nets at scale.

6 In the wide-network limit

See Wide NNs.

7 Convex relaxation

See convex relaxation.

8 In weight space versus in function space

See NNs in function space.

9 References

Allen-Zhu, Li, and Song. 2018. “A Convergence Theory for Deep Learning via Over-Parameterization.”
Arora, Cohen, and Hazan. 2018. “On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization.” arXiv:1802.06509 [Cs].
Bach. 2013. “Convex Relaxations of Structured Matrix Factorizations.” arXiv:1309.3117 [Cs, Math].
Bahmani, and Romberg. 2014. “Lifting for Blind Deconvolution in Random Mask Imaging: Identifiability and Convex Relaxation.” arXiv:1501.00046 [Cs, Math, Stat].
———. 2016. “Phase Retrieval Meets Statistical Learning Theory: A Flexible Convex Relaxation.” arXiv:1610.04210 [Cs, Math, Stat].
Bartlett, Montanari, and Rakhlin. 2021. “Deep Learning: A Statistical Viewpoint.” Acta Numerica.
Bubeck, and Sellke. 2021. “A Universal Law of Robustness via Isoperimetry.” In.
Dziugaite, and Roy. 2017. “Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters Than Training Data.” arXiv:1703.11008 [Cs].
Frankle, and Carbin. 2019. “The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.” arXiv:1803.03635 [Cs].
Głuch, and Urbanke. 2021. “Noether: The More Things Change, the More Stay the Same.” arXiv:2104.05508 [Cs, Stat].
Goldstein, and Studer. 2016. “PhaseMax: Convex Phase Retrieval via Basis Pursuit.” arXiv:1610.07531 [Cs, Math].
Hardt, Ma, and Recht. 2018. “Gradient Descent Learns Linear Dynamical Systems.” The Journal of Machine Learning Research.
Hasson, Nastase, and Goldstein. 2020. “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron.
Hastie, Montanari, Rosset, et al. 2020. “Surprises in High-Dimensional Ridgeless Least Squares Interpolation.”
Hayou, Ton, Doucet, et al. 2020. “Pruning Untrained Neural Networks: Principles and Analysis.” arXiv:2002.08797 [Cs, Stat].
Hazan, Singh, and Zhang. 2017. “Learning Linear Dynamical Systems via Spectral Filtering.” In NIPS.
Molchanov, Ashukha, and Vetrov. 2017. “Variational Dropout Sparsifies Deep Neural Networks.” In Proceedings of ICML.
Nakkiran, Kaplun, Bansal, et al. 2019. “Deep Double Descent: Where Bigger Models and More Data Hurt.” arXiv:1912.02292 [Cs, Stat].
Oliveira, and Skelton. 2001. “Stability Tests for Constrained Linear Systems.” In Perspectives in Robust Control. Lecture Notes in Control and Information Sciences.
Ran, and Hu. 2017. “Parameter Identifiability in Statistical Machine Learning: A Review.” Neural Computation.
Semenova, Rudin, and Parr. 2021. “A Study in Rashomon Curves and Volumes: A New Perspective on Generalization and Model Simplicity in Machine Learning.” arXiv:1908.01755 [Cs, Stat].
Tropp. 2006. “Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise.” IEEE Transactions on Information Theory.
You, Li, Xu, et al. 2019. “Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks.” In.
Zhang, Bengio, Hardt, et al. 2017. “Understanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR.
———. 2021. “Understanding Deep Learning (Still) Requires Rethinking Generalization.” Communications of the ACM.