ML benchmarks and their pitfalls

On marginal efficiency gain in paperclip manufacture



Machine learning’s gamified, Goodharted version of the replication crisis is a paper mill, or perhaps a paper treadmill. In this system, something counts as a “result” if it performs well on some conventional benchmark. But how often does that demonstrate real progress, and how often is it just overfitting to the benchmark?

Oleg Trott on How to sneak up competition leaderboards.
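
The mechanics are easy to demonstrate: if a leaderboard scores every submission against the same fixed test set, you can climb it by sheer volume of queries, with no signal at all. A toy simulation of that effect (my own sketch, not Trott’s method; all numbers are made up):

```python
# Illustrative only: adaptive overfitting to a fixed test set.
# Every "model" is pure noise, yet the best-so-far leaderboard score
# drifts well above chance simply because we query the test set a lot.
import numpy as np

rng = np.random.default_rng(0)
n_test = 1_000                       # size of the hidden test set
y = rng.integers(0, 2, n_test)       # true labels, unknown to the "competitor"

best_score = 0.0
for submission in range(2_000):
    guess = rng.integers(0, 2, n_test)   # random predictions
    score = (guess == y).mean()          # leaderboard reports test accuracy
    best_score = max(best_score, score)

print(f"best leaderboard accuracy after 2000 random submissions: {best_score:.3f}")
# Typically ~0.55: comfortably above chance, with zero real signal.
```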

Jörn-Henrik Jacobsen, Robert Geirhos, Claudio Michaelis: Shortcuts: How Neural Networks Love to Cheat.
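
The core failure mode is easy to reproduce on synthetic data: give a model a spurious feature that predicts the label perfectly at training time, and it will happily ignore the genuinely predictive (but harder) signal. A minimal sketch, assuming scikit-learn is available (my own toy example, not the paper’s experiments):

```python
# Toy illustration of shortcut learning: a spurious feature matches the
# label exactly during training, so the model leans on it and never
# learns the weak-but-real signal; at test time the shortcut breaks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_matches_label):
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 2.0, n)            # weak but genuine feature
    shortcut = y if shortcut_matches_label else rng.integers(0, 2, n)
    return np.column_stack([signal, shortcut]), y

X_train, y_train = make_data(5_000, shortcut_matches_label=True)
X_test, y_test = make_data(5_000, shortcut_matches_label=False)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # ~1.0, via the shortcut
print("test accuracy: ", clf.score(X_test, y_test))     # near chance once the shortcut breaks
```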

Sanjeev Arora and Yi Zhang’s Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis takes a minimum-description-length approach to modelling meta-overfitting, which I will not summarize except to recommend it for being extremely psychedelic.
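
Very roughly, the flavour of the argument as I read it (my paraphrase, not the paper’s exact statement) is a union bound over compactly describable models:

```latex
% Back-of-envelope sketch, not the paper's theorem. Suppose the winning
% model can be described in k bits to a referee who has seen nothing
% since the test set of n points was collected. A union bound over the
% at most 2^k such descriptions, combined with Hoeffding's inequality,
% gives, with probability at least 1 - \delta,
\[
  \bigl|\widehat{\mathrm{err}}_{\mathrm{test}}(f) - \mathrm{err}(f)\bigr|
  \;\le\;
  \sqrt{\frac{k \ln 2 + \ln(2/\delta)}{2n}},
\]
% so meta-overfitting is limited by how compactly the final model can be
% described, not by how many experiments were secretly run against the test set.
```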

Explicit connection to Goodhart’s law

Goodhart’s law.

Filip Piekniewski on the tendency to select bad target losses for convenience. See also Measuring Goodhart’s Law at OpenAI.
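
The canonical toy case: on a heavily imbalanced problem, raw accuracy is the convenient target, and a constant classifier Goodharts it perfectly. (My own toy numbers, nothing to do with the linked posts.)

```python
# Toy Goodhart example: on a 99:1 imbalanced problem, "accuracy" is a
# convenient target that a useless constant classifier maxes out, while
# the quantity we actually care about (finding rare positives) is zero.
import numpy as np

rng = np.random.default_rng(0)
y = (rng.random(100_000) < 0.01).astype(int)     # ~1% positives

always_negative = np.zeros_like(y)               # the metric-gaming "model"
accuracy = (always_negative == y).mean()
recall = always_negative[y == 1].mean()

print(f"accuracy: {accuracy:.3f}")               # ~0.99, looks great
print(f"recall:   {recall:.3f}")                 # 0.0, useless for the real goal
```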

Measuring speed

Lots of algorithms claim to go fast, but that is a complicated claim on modern hardware. Stabilizer attempts to randomise memory layout (code, stack and heap placement) repeatedly during execution, so that layout effects average out and comparisons are statistically “fair”.
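
I will not reproduce Stabilizer here, but even before layout effects, a single timing run is close to meaningless; at minimum, report a distribution over repeated runs. A minimal sketch with the standard library (my own code; it does not control for layout the way Stabilizer does, it just makes run-to-run noise visible):

```python
# Report a timing distribution rather than a single number.
import statistics
import timeit

def workload():
    # stand-in for the routine under test
    return sum(i * i for i in range(10_000))

# Many repeats, each timing a batch of calls, to expose run-to-run variance.
times = timeit.repeat(workload, number=100, repeat=30)

q1, q2, q3 = statistics.quantiles(times, n=4)
print(f"median: {q2:.4f}s per 100 calls")
print(f"IQR:    {q3 - q1:.4f}s")
print(f"range:  {min(times):.4f}s to {max(times):.4f}s")
```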

References

Arora, Sanjeev, and Yi Zhang. 2021. “Rip van Winkle’s Razor: A Simple Estimate of Overfit to Test Data.” arXiv:2102.13189 [Cs, Stat], February.
Blum, Avrim, and Moritz Hardt. 2015. “The Ladder: A Reliable Leaderboard for Machine Learning Competitions.” arXiv:1502.04585 [Cs], February.
Brockman, Greg, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. “OpenAI Gym.” arXiv:1606.01540 [Cs], June.
Fleming, Philip J., and John J. Wallace. 1986. “How Not to Lie with Statistics: The Correct Way to Summarize Benchmark Results.” Communications of the ACM 29 (3): 218–21.
Geirhos, Robert, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. “Shortcut Learning in Deep Neural Networks.” arXiv:2004.07780 [Cs, q-Bio], April.
Hutson, Matthew. 2022. “Taught to the Test.” Science 376 (6593): 570–73.
Hyndman, Rob J. 2020. “A Brief History of Forecasting Competitions.” International Journal of Forecasting, M4 Competition, 36 (1): 7–14.
Kistowski, Jóakim v., Jeremy A. Arnold, Karl Huppler, Klaus-Dieter Lange, John L. Henning, and Paul Cao. 2015. “How to Build a Benchmark.” In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering, 333–36. ICPE ’15. New York, NY, USA: Association for Computing Machinery.
Lathuilière, Stéphane, Pablo Mesejo, Xavier Alameda-Pineda, and Radu Horaud. 2020. “A Comprehensive Analysis of Deep Regression.” IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (9): 2065–81.
Lones, Michael A. 2021. “How to Avoid Machine Learning Pitfalls: A Guide for Academic Researchers,” August.
Makridakis, Spyros, Evangelos Spiliotis, and Vassilios Assimakopoulos. 2020. “The M4 Competition: 100,000 Time Series and 61 Forecasting Methods.” International Journal of Forecasting, M4 Competition, 36 (1): 54–74.
Musgrave, Kevin, Serge Belongie, and Ser-Nam Lim. 2020. “A Metric Learning Reality Check.” arXiv:2003.08505 [Cs], July.
Mytkowicz, Todd, Amer Diwan, Matthias Hauswirth, and Peter F. Sweeney. 2009. “Producing Wrong Data Without Doing Anything Obviously Wrong!” In Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems, 265–76. ASPLOS XIV. New York, NY, USA: Association for Computing Machinery.
Olson, Randal S., William La Cava, Patryk Orzechowski, Ryan J. Urbanowicz, and Jason H. Moore. 2017. “PMLB: A Large Benchmark Suite for Machine Learning Evaluation and Comparison.” BioData Mining 10 (1): 36.
