Overparameterization

a.k.a. improper learning



Notes on the general technique of giving a model more free parameters than it strictly seems to need, especially in machine learning, especially especially in neural nets.

For smoothness

This insight is fresh. Bubeck and Sellke (2021) argue:

Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in the current practice of deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is necessary if one wants to interpolate the data smoothly. Namely we show that smooth interpolation requires \(d\) times more parameters than mere interpolation, where \(d\) is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial size weights, and any covariate distribution verifying isoperimetry. In the case of two-layers neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.
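Stated loosely (this is my paraphrase from memory of their main theorem, so take the exact conditions and constants from the paper), the law looks like this:

```latex
% Universal law of robustness (Bubeck & Sellke 2021), loosely paraphrased;
% log factors, constants and regularity conditions suppressed.
% Setting: n noisy data points (x_i, y_i) with x_i in R^d, fitted by a function f
% from a class parametrized by p polynomially-bounded weights, with covariates
% satisfying isoperimetry.
\[
\frac{1}{n}\sum_{i=1}^{n}\bigl(f(x_i)-y_i\bigr)^2 \;\le\; \text{noise level}
\quad\Longrightarrow\quad
\operatorname{Lip}(f) \;\gtrsim\; \sqrt{\frac{nd}{p}} .
\]
% So an O(1) Lipschitz constant (smooth interpolation) forces p on the order of nd,
% i.e. roughly d times the p \approx n parameters needed for mere interpolation.
```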

In the wide-network limit

See Wide NNs.

For making optimisation nice

The combination of overparameterization and SGD is argued by Zeyuan Allen-Zhu, Yuanzhi Li and Zhao Song to be the secret to how deep learning works. Certainly, looking at how some classic optimization problems can be lifted into convex ones, we can imagine something similar happening by analogy here.
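As a sanity check on the flavour of that claim (my own toy construction, not the Allen-Zhu–Li–Song setting), here is plain full-batch gradient descent on two-layer ReLU nets of increasing width, all trying to interpolate the same random labels. Only the width changes; the wider the net, the more reliably the training loss goes to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10                       # tiny dataset: 50 points in 10 dimensions
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)          # random labels, so the only goal is interpolation

def train_two_layer(m, steps=3000, lr=0.5):
    """Full-batch gradient descent on a two-layer ReLU net of width m (NTK-style scaling)."""
    W = rng.standard_normal((d, m)) / np.sqrt(d)   # input-to-hidden weights
    a = rng.standard_normal(m)                     # hidden-to-output weights
    for _ in range(steps):
        H = np.maximum(X @ W, 0.0)                 # hidden activations, shape (n, m)
        r = H @ a / np.sqrt(m) - y                 # residuals
        grad_a = H.T @ r / (n * np.sqrt(m))
        grad_W = X.T @ ((r[:, None] * (H > 0.0)) * a) / (n * np.sqrt(m))
        a -= lr * grad_a
        W -= lr * grad_W
    return np.mean((np.maximum(X @ W, 0.0) @ a / np.sqrt(m) - y) ** 2)

for m in (5, 50, 500, 2000):
    print(f"width {m:5d}: final training MSE {train_two_layer(m):.4f}")
```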

RJ Lipton discusses Arno van den Essen’s incidental work on stabilisation methods of polynomials, which relates, AFAICT, to transfer-function-type stability. Does this connect to the overparameterization in the rational-transfer-function analysis of Hardt, Ma, and Recht (2018)? πŸ—.

Double descent

When adding data (or parameters?) can make the model worse. See, e.g., Deep Double Descent (Nakkiran et al. 2019).
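A minimal way to see the parameter-wise version of the curve (my own toy setup, not the one in Nakkiran et al. 2019): minimum-norm least squares on random ReLU features, where test error blows up as the number of features p crosses the interpolation threshold p β‰ˆ n and then comes back down as p grows further:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, n_test = 100, 20, 2000
w_true = rng.standard_normal(d) / np.sqrt(d)

def make_data(n_samples):
    X = rng.standard_normal((n_samples, d))
    y = X @ w_true + 0.2 * rng.standard_normal(n_samples)   # noisy linear target
    return X, y

X_train, y_train = make_data(n)
X_test, y_test = make_data(n_test)
V = rng.standard_normal((d, 2000)) / np.sqrt(d)   # one fixed pool of random feature directions

def relu_features(X, p):
    return np.maximum(X @ V[:, :p], 0.0)          # first p random ReLU features

for p in (10, 50, 90, 100, 110, 200, 500, 2000):
    Phi = relu_features(X_train, p)
    beta = np.linalg.pinv(Phi) @ y_train          # minimum-norm least-squares fit
    test_mse = np.mean((relu_features(X_test, p) @ beta - y_test) ** 2)
    print(f"p = {p:4d} features: test MSE {test_mse:.3f}")
```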

Lottery ticket hypothesis

The Lottery Ticket hypothesis (Frankle and Carbin 2019; Hayou et al. 2020) asserts something like β€œthere is a good compact network hidden inside the overparameterized one you have”. Intuitively, it is computationally hard to find that hidden optimal network. I am interested in computational bounds for this: how much cheaper is it to compute with a massive network than to find the tiny network that does as well or better? The calculus here is altered by SIMD architectures such as GPUs, which change the relative cost (although not the scaling) of certain types of calculations, which is arguably how we got to the modern form of neural net obsession.
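For concreteness, a bare-bones sketch of the train-prune-rewind loop behind the hypothesis (a single pruning round on synthetic data, so a toy stand-in for Frankle and Carbin’s iterative procedure rather than a faithful reproduction):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 200, 20, 300                  # 200 points, 20 dims, width-300 net
X = rng.standard_normal((n, d))
w_teacher = rng.standard_normal(d) / np.sqrt(d)
y = np.tanh(X @ w_teacher)              # a smooth target a small subnetwork can fit

def train(W0, a0, mask_W, mask_a, steps=4000, lr=0.5):
    """Full-batch gradient descent on a masked two-layer ReLU net, from init (W0, a0)."""
    W, a = W0 * mask_W, a0 * mask_a
    for _ in range(steps):
        H = np.maximum(X @ W, 0.0)                  # hidden activations
        r = H @ a / np.sqrt(m) - y                  # residuals
        grad_a = mask_a * (H.T @ r) / (n * np.sqrt(m))
        grad_W = mask_W * (X.T @ ((r[:, None] * (H > 0.0)) * a)) / (n * np.sqrt(m))
        a -= lr * grad_a
        W -= lr * grad_W
    mse = np.mean((np.maximum(X @ W, 0.0) @ a / np.sqrt(m) - y) ** 2)
    return W, a, mse

W0 = rng.standard_normal((d, m)) / np.sqrt(d)   # remember the initialization:
a0 = rng.standard_normal(m)                     # the "ticket" is (mask, init) together

# 1. Train the dense, overparameterized network.
W1, a1, dense_mse = train(W0, a0, np.ones((d, m)), np.ones(m))

# 2. Prune: keep only the largest-magnitude 10% of the trained input weights.
threshold = np.quantile(np.abs(W1), 0.9)
mask_W = (np.abs(W1) >= threshold).astype(float)

# 3. Rewind surviving weights to their original initialization and retrain under the mask.
_, _, ticket_mse = train(W0, a0, mask_W, np.ones(m))

print(f"dense MSE {dense_mse:.4f}  vs  rewound 10%-sparse ticket MSE {ticket_mse:.4f}")
```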

References

Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. 2018. β€œA Convergence Theory for Deep Learning via Over-Parameterization,” November.
Arora, Sanjeev, Nadav Cohen, and Elad Hazan. 2018. β€œOn the Optimization of Deep Networks: Implicit Acceleration by Overparameterization.” arXiv:1802.06509 [Cs], February.
Bach, Francis. 2013. β€œConvex Relaxations of Structured Matrix Factorizations.” arXiv:1309.3117 [Cs, Math], September.
Bahmani, Sohail, and Justin Romberg. 2014. β€œLifting for Blind Deconvolution in Random Mask Imaging: Identifiability and Convex Relaxation.” arXiv:1501.00046 [Cs, Math, Stat], December.
β€”β€”β€”. 2016. β€œPhase Retrieval Meets Statistical Learning Theory: A Flexible Convex Relaxation.” arXiv:1610.04210 [Cs, Math, Stat], October.
Bartlett, Peter L., Andrea Montanari, and Alexander Rakhlin. 2021. β€œDeep Learning: A Statistical Viewpoint,” March.
Bubeck, Sebastien, and Mark Sellke. 2021. β€œA Universal Law of Robustness via Isoperimetry.” In.
Dziugaite, Gintare Karolina, and Daniel M. Roy. 2017. β€œComputing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters Than Training Data.” arXiv:1703.11008 [Cs], October.
Frankle, Jonathan, and Michael Carbin. 2019. β€œThe Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.” arXiv:1803.03635 [Cs], March.
GΕ‚uch, Grzegorz, and RΓΌdiger Urbanke. 2021. β€œNoether: The More Things Change, the More Stay the Same.” arXiv:2104.05508 [Cs, Stat], April.
Goldstein, Tom, and Christoph Studer. 2016. β€œPhaseMax: Convex Phase Retrieval via Basis Pursuit.” arXiv:1610.07531 [Cs, Math], October.
Hardt, Moritz, Tengyu Ma, and Benjamin Recht. 2018. β€œGradient Descent Learns Linear Dynamical Systems.” The Journal of Machine Learning Research 19 (1): 1025–68.
Hasson, Uri, Samuel A. Nastase, and Ariel Goldstein. 2020. β€œDirect Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron 105 (3): 416–34.
Hayou, Soufiane, Jean-Francois Ton, Arnaud Doucet, and Yee Whye Teh. 2020. β€œPruning Untrained Neural Networks: Principles and Analysis.” arXiv:2002.08797 [Cs, Stat], June.
Hazan, Elad, Karan Singh, and Cyril Zhang. 2017. β€œLearning Linear Dynamical Systems via Spectral Filtering.” In NIPS.
Molchanov, Dmitry, Arsenii Ashukha, and Dmitry Vetrov. 2017. β€œVariational Dropout Sparsifies Deep Neural Networks.” In Proceedings of ICML.
Nakkiran, Preetum, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. 2019. β€œDeep Double Descent: Where Bigger Models and More Data Hurt.” arXiv:1912.02292 [Cs, Stat], December.
Oliveira, MaurΓ­cio C. de, and Robert E. Skelton. 2001. β€œStability Tests for Constrained Linear Systems.” In Perspectives in Robust Control, 241–57. Lecture Notes in Control and Information Sciences. Springer, London.
Ran, Zhi-Yong, and Bao-Gang Hu. 2017. β€œParameter Identifiability in Statistical Machine Learning: A Review.” Neural Computation 29 (5): 1151–1203.
Semenova, Lesia, Cynthia Rudin, and Ronald Parr. 2021. β€œA Study in Rashomon Curves and Volumes: A New Perspective on Generalization and Model Simplicity in Machine Learning.” arXiv:1908.01755 [Cs, Stat], April.
Tropp, J.A. 2006. β€œJust Relax: Convex Programming Methods for Identifying Sparse Signals in Noise.” IEEE Transactions on Information Theory 52 (3): 1030–51.
You, Haoran, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, and Yingyan Lin. 2019. β€œDrawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks.” In.
Zhang, Chiyuan, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. β€œUnderstanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR.
β€”β€”β€”. 2021. β€œUnderstanding Deep Learning (Still) Requires Rethinking Generalization.” Communications of the ACM 64 (3): 107–15.
