ML benchmarks and their pitfalls

On marginal efficiency gain in paperclip manufacture



Machine learning’s gamified version of the replication crisis is a paper mill, or perhaps a paper treadmill. In this system, something counts as “results” if it performs well on some conventional benchmarks. But how often does that demonstrate real progress, and how often is it overfitting to the benchmarks?
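
For a sense of how benchmark overfitting arises from selection alone, here is a toy simulation (all numbers illustrative, not tied to any real benchmark): evaluate many models of identical true skill on one fixed test set, then crown the best.

```python
import numpy as np

rng = np.random.default_rng(0)
n_test, n_models = 2_000, 500

# 500 "models" that are all literally coin flips: true accuracy 0.5.
labels = rng.integers(0, 2, size=n_test)
scores = [
    (rng.integers(0, 2, size=n_test) == labels).mean()
    for _ in range(n_models)
]

print("true accuracy of every model: 0.500")
print(f"best test accuracy after selection: {max(scores):.3f}")
# Typically ~0.53: the leaderboard winner's margin is pure selection noise.
```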

Oleg Trott on How to sneak up competition leaderboards.
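
The defensive counterpart to such probing is the Ladder mechanism of Blum and Hardt (2015) (see references): the leaderboard reveals a new score only when it beats the incumbent by at least a fixed step, which caps how much holdout information each adaptive submission leaks. A minimal sketch, with the step size `eta` and the 0–1 loss as my illustrative choices:

```python
import numpy as np

class Ladder:
    """Sketch of the Ladder leaderboard (Blum & Hardt, 2015):
    reveal a new score only when it beats the incumbent by >= eta."""

    def __init__(self, eta=0.01):
        self.eta = eta
        self.best = float("inf")

    def submit(self, predictions, holdout_labels):
        loss = float(np.mean(predictions != holdout_labels))  # 0-1 loss
        if loss < self.best - self.eta:
            # Snap to a grid of width eta so each reported score
            # carries only a few bits of holdout information.
            self.best = round(loss / self.eta) * self.eta
        return self.best  # otherwise the previous best is re-reported
```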

Filip Piekniewski on the tendency to select bad target losses for convenience, which he analyses as a flavour of Goodhart’s law.
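
A toy version of that mismatch (my synthetic example, not Piekniewski’s): on a heavy-tailed target, the constant prediction that optimises the convenient squared error is measurably worse on the metric you actually report, here absolute error.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical heavy-tailed target, e.g. session lengths or claim sizes.
y = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

mean_pred = y.mean()        # optimises MSE, the convenient training loss
median_pred = np.median(y)  # optimises MAE, the metric of record

print(f"MAE when optimising the proxy (MSE):   {np.abs(y - mean_pred).mean():.2f}")
print(f"MAE when optimising the metric itself: {np.abs(y - median_pred).mean():.2f}")
# Goodhart in miniature: the proxy-optimal model loses on the real target.
```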

Jörn-Henrik Jacobsen, Robert Geirhos, Claudio Michaelis: Shortcuts: How Neural Networks Love to Cheat.
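
A minimal synthetic sketch of the phenomenon (the “watermark” feature and correlation levels are my invention, not the paper’s experiments): give a linear classifier a weak genuine feature plus a shortcut that is predictive in training but uninformative at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000

def make_split(shortcut_corr):
    y = rng.integers(0, 2, size=n)
    signal = y + rng.normal(scale=2.0, size=n)  # weak but genuine feature
    # "Watermark" feature that matches the label with prob. shortcut_corr.
    shortcut = np.where(rng.random(n) < shortcut_corr, y, 1 - y)
    return np.column_stack([signal, shortcut.astype(float)]), y

X_train, y_train = make_split(shortcut_corr=0.95)  # shortcut works here...
X_test, y_test = make_split(shortcut_corr=0.5)     # ...and breaks here

clf = LogisticRegression().fit(X_train, y_train)
print(f"train accuracy: {clf.score(X_train, y_train):.2f}")  # ~0.95
print(f"test accuracy:  {clf.score(X_test, y_test):.2f}")    # near chance
```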

Sanjeev Arora and Yi Zhang’s Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis takes a minimum-description-length approach to this kind of meta-overfitting, which I will not summarize here except to recommend it for being extremely psychedelic.
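
For flavour, the engine underneath is the classic description-length union bound (my paraphrase of the standard argument, not the paper’s exact statement): if the winning model can be communicated in ℓ bits to a referee who has been asleep since before the test set was collected, then Hoeffding’s inequality plus a union bound over the at most 2^ℓ describable models gives, with probability at least 1 − δ over a test set of N samples,

```latex
\[
  \operatorname{err}_{\text{true}}(h) - \operatorname{err}_{\text{test}}(h)
  \;\le\;
  \sqrt{\frac{\ell \ln 2 + \ln(1/\delta)}{2N}}
  \quad \text{simultaneously for every } h \text{ describable in } \ell \text{ bits.}
\]
```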

References

Arora, Sanjeev, and Yi Zhang. 2021. “Rip van Winkle’s Razor: A Simple Estimate of Overfit to Test Data.” February 25, 2021. http://arxiv.org/abs/2102.13189.
Blum, Avrim, and Moritz Hardt. 2015. “The Ladder: A Reliable Leaderboard for Machine Learning Competitions.” February 16, 2015. http://arxiv.org/abs/1502.04585.
Geirhos, Robert, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. “Shortcut Learning in Deep Neural Networks.” April 16, 2020. http://arxiv.org/abs/2004.07780.
Lathuilière, Stéphane, Pablo Mesejo, Xavier Alameda-Pineda, and Radu Horaud. 2020. “A Comprehensive Analysis of Deep Regression.” IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (9): 2065–81. https://doi.org/10.1109/TPAMI.2019.2910523.
Musgrave, Kevin, Serge Belongie, and Ser-Nam Lim. 2020. “A Metric Learning Reality Check.” July 23, 2020. http://arxiv.org/abs/2003.08505.
