Stability (in learning)



If your estimate is robust to the deletion of a single data point, it is a stable estimate, and stability, so I am told, implies generalisability. These statements can be made precise, and making them precise might give us new ideas about risk bounds, model selection, and connections to optimization.
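
To fix ideas, here is the kind of precise statement I mean, paraphrased from the uniform-stability literature (Bousquet and Elisseeff 2002; Hardt, Recht, and Singer 2015). The notation is mine and this is a sketch from memory rather than a quotation:

```latex
% A learning algorithm A is \beta-uniformly stable if, for any two samples
% S, S' of size n that differ in a single point and any test point z,
\sup_z \; \mathbb{E}_A\bigl[\ell(A(S), z) - \ell(A(S'), z)\bigr] \le \beta ,
% and uniform stability controls the expected generalization gap
% (empirical risk \hat{R}_S versus population risk R):
\bigl|\, \mathbb{E}_{S,A}\bigl[ R(A(S)) - \hat{R}_S(A(S)) \bigr] \,\bigr| \le \beta .
```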

Supposedly there is also a connection to differential privacy, but since I don't yet know anything about differential privacy I can't take that statement any further, except to note that I would like to work it out one day. There is presumably also a connection to robust inference, since the setup sounds similar.

scrapbook:

Yu (2013):

Reproducibility is imperative for any scientific discovery. More often than not, modern scientific findings rely on statistical analysis of high-dimensional data. At a minimum, reproducibility manifests itself in stability of statistical results relative to “reasonable” perturbations to data and to the model used. Jacknife, bootstrap, and cross-validation are based on perturbations to data, while robust statistics methods deal with perturbations to models.

Moritz Hardt, Stability as a foundation of machine learning:

Central to machine learning is our ability to relate how a learning algorithm fares on a sample to its performance on unseen instances. This is called generalization.

In this post, I will describe a purely algorithmic approach to generalization. The property that makes this possible is stability. An algorithm is stable, intuitively speaking, if its output doesn’t change much if we perturb the input sample in a single point. We will see that this property by itself is necessary and sufficient for generalization.
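
To make that concrete for myself, here is a minimal numerical sketch (mine, not from the quoted post), assuming scikit-learn's Ridge as the learning algorithm: refit after swapping out a single training point and see how far the predictions move.

```python
# Empirical proxy for uniform stability: how much do a ridge regression's
# predictions change when one training point is replaced by a fresh draw?
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d = 200, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)
X_test = rng.normal(size=(1000, d))

base_preds = Ridge(alpha=1.0).fit(X, y).predict(X_test)

worst_shift = 0.0
for i in range(n):
    # Replace training point i with another draw from the same distribution.
    X_i, y_i = X.copy(), y.copy()
    X_i[i] = rng.normal(size=d)
    y_i[i] = X_i[i] @ w_true + 0.1 * rng.normal()
    perturbed_preds = Ridge(alpha=1.0).fit(X_i, y_i).predict(X_test)
    worst_shift = max(worst_shift, np.max(np.abs(base_preds - perturbed_preds)))

print(f"largest prediction change under a single-point swap: {worst_shift:.4f}")
```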

Xu, Caramanis, and Mannor (2012):

We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: A sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse.
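
The same leave-one-out experiment gives a crude way to look at the tension they describe. The following sketch (again mine, with arbitrary regularization strengths) measures how far the lasso and ridge coefficient vectors move when single training points are deleted:

```python
# Compare coefficient movement of Lasso vs Ridge under leave-one-out deletion.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
n, d = 100, 50
w_true = np.zeros(d)
w_true[:5] = 3.0                      # sparse ground truth
X = rng.normal(size=(n, d))
y = X @ w_true + 0.5 * rng.normal(size=n)

def worst_coef_shift(make_model):
    """Largest coefficient change over all leave-one-out refits."""
    full_coef = make_model().fit(X, y).coef_
    shifts = []
    for i in range(n):
        keep = np.delete(np.arange(n), i)
        loo_coef = make_model().fit(X[keep], y[keep]).coef_
        shifts.append(np.max(np.abs(full_coef - loo_coef)))
    return max(shifts)

print("lasso:", worst_coef_shift(lambda: Lasso(alpha=0.1)))
print("ridge:", worst_coef_shift(lambda: Ridge(alpha=1.0)))
```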

References

Agarwal, Shivani, and Partha Niyogi. 2009. “Generalization Bounds for Ranking Algorithms via Algorithmic Stability.” Journal of Machine Learning Research 10: 441–74.
Bassily, Raef, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, and Jonathan Ullman. 2015. “Algorithmic Stability for Adaptive Data Analysis.” arXiv:1511.02513 [cs], November.
Bousquet, Olivier, and André Elisseeff. 2001. “Algorithmic Stability and Generalization Performance.” In Advances in Neural Information Processing Systems, 13:196–202. MIT Press.
Bousquet, Olivier, and André Elisseeff. 2002. “Stability and Generalization.” Journal of Machine Learning Research 2 (Mar): 499–526.
Chandramoorthy, Nisha, Andreas Loukas, Khashayar Gatmiry, and Stefanie Jegelka. 2022. “On the Generalization of Learning Algorithms That Do Not Converge.” arXiv.
Freeman, R. A., Peng Yang, and K. M. Lynch. 2006. “Stability and Convergence Properties of Dynamic Average Consensus Estimators.” In 2006 45th IEEE Conference on Decision and Control, 338–43. San Diego, CA, USA: IEEE.
Giryes, Raja, Guillermo Sapiro, and Alex M. Bronstein. 2014. “On the Stability of Deep Networks.” arXiv:1412.5896 [cs, math, stat], December.
Hardt, Moritz, Tengyu Ma, and Benjamin Recht. 2018. “Gradient Descent Learns Linear Dynamical Systems.” The Journal of Machine Learning Research 19 (1): 1025–68.
Hardt, Moritz, Benjamin Recht, and Yoram Singer. 2015. “Train Faster, Generalize Better: Stability of Stochastic Gradient Descent.” arXiv:1509.01240 [cs, math, stat], September.
Kutin, Samuel, and Partha Niyogi. 2002. “Almost-Everywhere Algorithmic Stability and Generalization Error.” In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, 275–82. UAI ’02. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Liu, Han, Kathryn Roeder, and Larry Wasserman. 2010. “Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models.” In Advances in Neural Information Processing Systems 23, edited by J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, 1432–40. Curran Associates, Inc.
Meinshausen, Nicolai, and Peter Bühlmann. 2010. “Stability Selection.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72 (4): 417–73.
Meng, Qi, Yue Wang, Wei Chen, Taifeng Wang, Zhi-Ming Ma, and Tie-Yan Liu. 2016. “Generalization Error Bounds for Optimization Algorithms via Stability.” arXiv:1609.08397 [stat].
Xu, H., C. Caramanis, and S. Mannor. 2012. “Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem.” IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (1): 187–93.
Yu, Bin. 2013. “Stability.” Bernoulli 19 (4): 1484–1500.
Zhang, Chiyuan, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. “Understanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR.
