ML benchmarks and their pitfalls

On marginal efficiency gain in paperclip manufacture

August 16, 2020 — October 15, 2024

economics
game theory
how do science
incentive mechanisms
institutions
machine learning
neural nets
statistics
Your baseline is shooting up my spine
Your baseline
Your baseline has got me feeling fine
It’s filling up my mind

with apologies to Puretone

Machine learning’s gamified/Goodharted version of the replication crisis is the paper treadmill, wherein something counts as a “novel result” if it performs well on some conventional benchmark. But how often does that demonstrate real progress, and how often is it overfitting to the benchmark?

Oleg Trott on How to sneak up competition leaderboards.

Jörn-Henrik Jacobsen, Robert Geirhos, Claudio Michaelis: Shortcuts: How Neural Networks Love to Cheat.

Sanjeev Arora, Yi Zhang: Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis takes a minimum-description-length approach to estimating meta-overfitting from adaptive test-set reuse, which I will not summarize except to recommend it for being extremely psychedelic.
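
The mechanism behind leaderboard sneaking and meta-overfitting is easy to simulate. Here is a minimal sketch of my own (not taken from any of the papers above): submit enough pure-noise predictors against a fixed test set, keep the best score, and the leaderboard reports “progress” out of nothing. This is the selection effect that the Ladder mechanism (Blum and Hardt 2015) and Arora and Zhang’s description-length bound are designed to keep honest.

```python
import numpy as np

rng = np.random.default_rng(0)
n_test = 2_000        # size of the public leaderboard's held-out set
n_submissions = 500   # number of adaptive submissions

# A binary "test set" with no learnable structure at all.
y_test = rng.integers(0, 2, size=n_test)

best_acc = 0.0
for _ in range(n_submissions):
    # Each "model" is a coin flip per test case: zero genuine skill.
    y_pred = rng.integers(0, 2, size=n_test)
    best_acc = max(best_acc, (y_pred == y_test).mean())

print(f"chance level: 0.500, best leaderboard score: {best_acc:.3f}")
# Selecting the best of 500 noise submissions typically reports ~0.53:
# apparent progress produced purely by reusing the test set.
```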

1 Explicit connection to Goodhart’s law

Goodhart’s law.

Filip Piekniewski on the tendency to select bad target losses for convenience.

Measuring Goodhart’s Law, at OpenAI.
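
A toy sketch of the loss-selection problem (my own illustration, not drawn from either of the links above): two models make identical decisions, so a convenient proxy metric like accuracy cannot tell them apart, but one of them has Goodharted its way to overconfident probabilities and is far worse on the log loss we actually care about.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
p_true = rng.uniform(0.05, 0.95, size=n)          # true event probabilities
y = (rng.uniform(size=n) < p_true).astype(int)    # observed outcomes

def accuracy(p, y):
    return ((p > 0.5) == y).mean()

def log_loss(p, y):
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

# Model A: reports (noisy) honest probabilities.
p_a = np.clip(p_true + rng.normal(0, 0.05, size=n), 0.01, 0.99)
# Model B: same decisions, but games the proxy with overconfident outputs.
p_b = np.where(p_a > 0.5, 0.999, 0.001)

print("accuracy  A:", round(accuracy(p_a, y), 3), " B:", round(accuracy(p_b, y), 3))
print("log loss  A:", round(log_loss(p_a, y), 3), " B:", round(log_loss(p_b, y), 3))
# Identical accuracy, but B's log loss is several times worse: ranking by
# the convenient metric hides the failure on the one that matters.
```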

2 Measuring speed

Lots of algorithms claim to go fast, but on modern hardware that is a complicated claim: memory layout, caches, and measurement order can swamp the effect being measured (Mytkowicz et al. 2009). Stabilizer attempts to randomise code, stack, and heap layout so that timing comparisons are “fair”.
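
Stabilizer works at the compiler/runtime level, but the same hygiene applies to everyday script-level timing: warm up first, interleave the contenders in random order so neither monopolises a quiet (or noisy) stretch of the machine, repeat many times, and report a robust summary rather than a single best run. A minimal sketch, with two placeholder contenders standing in for whatever is being compared:

```python
import random
import statistics
import time

def contender_a():
    return sum(i * i for i in range(10_000))

def contender_b():
    total = 0
    for i in range(10_000):
        total += i * i
    return total

def bench(funcs, repeats=200, warmup=20):
    """Time each contender in randomly interleaved order; report median and IQR."""
    times = {name: [] for name in funcs}
    for fn in funcs.values():           # warm-up: let caches and clocks settle
        for _ in range(warmup):
            fn()
    schedule = [name for name in funcs for _ in range(repeats)]
    random.shuffle(schedule)            # interleave so drift hits everyone equally
    for name in schedule:
        t0 = time.perf_counter()
        funcs[name]()
        times[name].append(time.perf_counter() - t0)
    for name, ts in times.items():
        q1, med, q3 = statistics.quantiles(ts, n=4)
        print(f"{name}: median {med * 1e6:.0f} µs, IQR [{q1 * 1e6:.0f}, {q3 * 1e6:.0f}] µs")

bench({"genexpr": contender_a, "loop": contender_b})
```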

3 Incoming

MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering | OpenAI

We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering. To this end, we curate 75 ML engineering-related competitions from Kaggle, creating a diverse set of challenging tasks that test real-world ML engineering skills such as training models, preparing datasets, and running experiments. We establish human baselines for each competition using Kaggle’s publicly available leaderboards. We use open-source agent scaffolds to evaluate several frontier language models on our benchmark, finding that the best-performing setup—OpenAI’s o1-preview with AIDE scaffolding—achieves at least the level of a Kaggle bronze medal in 16.9% of competitions. In addition to our main results, we investigate various forms of resource-scaling for AI agents and the impact of contamination from pre-training. We open-source our benchmark code to facilitate future research in understanding the ML engineering capabilities of AI agents.

4 References

Arora, and Zhang. 2021. “Rip van Winkle’s Razor: A Simple Estimate of Overfit to Test Data.” arXiv:2102.13189 [cs, stat].
Blum, and Hardt. 2015. “The Ladder: A Reliable Leaderboard for Machine Learning Competitions.” arXiv:1502.04585 [cs].
Brockman, Cheung, Pettersson, et al. 2016. “OpenAI Gym.” arXiv:1606.01540 [cs].
Fleming, and Wallace. 1986. “How Not to Lie with Statistics: The Correct Way to Summarize Benchmark Results.” Communications of the ACM.
Geirhos, Jacobsen, Michaelis, et al. 2020. “Shortcut Learning in Deep Neural Networks.” arXiv:2004.07780 [cs, q-bio].
Hutson. 2022. “Taught to the Test.” Science.
Hyndman. 2020. “A Brief History of Forecasting Competitions.” International Journal of Forecasting (M4 Competition).
Koch, and Peterson. 2024. “From Protoscience to Epistemic Monoculture: How Benchmarking Set the Stage for the Deep Learning Revolution.”
Lathuilière, Mesejo, Alameda-Pineda, et al. 2020. “A Comprehensive Analysis of Deep Regression.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Liu, Miao, Zhan, et al. 2019. “Large-Scale Long-Tailed Recognition in an Open World.”
Lones. 2021. “How to Avoid Machine Learning Pitfalls: A Guide for Academic Researchers.”
Makridakis, Spiliotis, and Assimakopoulos. 2020. “The M4 Competition: 100,000 Time Series and 61 Forecasting Methods.” International Journal of Forecasting (M4 Competition).
Mitchell, Wu, Zaldivar, et al. 2019. “Model Cards for Model Reporting.” In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19).
Musgrave, Belongie, and Lim. 2020. “A Metric Learning Reality Check.” arXiv:2003.08505 [cs].
Mytkowicz, Diwan, Hauswirth, et al. 2009. “Producing Wrong Data Without Doing Anything Obviously Wrong!” In Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS XIV).
Olson, La Cava, Orzechowski, et al. 2017. “PMLB: A Large Benchmark Suite for Machine Learning Evaluation and Comparison.” BioData Mining.
Raji, Bender, Paullada, et al. 2021. “AI and the Everything in the Whole Wide World Benchmark.”
v. Kistowski, Arnold, Huppler, et al. 2015. “How to Build a Benchmark.” In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering (ICPE ’15).