ML benchmarks and their pitfalls

Psychometrics for robots

2020-08-15 — 2025-05-23

economics
game theory
how do science
incentive mechanisms
institutions
machine learning
neural nets
statistics
Your baseline is shooting up my spine
Your baseline
Your baseline has got me feeling fine
It’s filling up my mind

with apologies to Puretone

Machine learning’s gamified/Goodharted version of the replication crisis is the paper treadmill, where something counts as a “novel result” if it performs well on some conventional benchmarks. But how often does that reflect real progress, and how often is it just overfitting to the benchmarks?
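
A toy version of the worry: if enough models are evaluated against the same fixed test set and only the best score gets reported, the “state of the art” improves even when no model is better than chance. A minimal simulation (the test-set size and submission counts are arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_test = 2_000                               # size of the shared, fixed test set
labels = rng.integers(0, 2, size=n_test)     # ground-truth binary labels

def best_reported_accuracy(n_submissions: int) -> float:
    """Best test accuracy among n_submissions models that are pure coin flips."""
    best = 0.0
    for _ in range(n_submissions):
        preds = rng.integers(0, 2, size=n_test)        # a model with zero skill
        best = max(best, float(np.mean(preds == labels)))
    return best

for n in (1, 10, 100, 1_000, 10_000):
    print(f"{n:>6} submissions: best published accuracy {best_reported_accuracy(n):.3f}")
# The headline number climbs with the number of submissions even though every
# model is guessing: apparent progress from test-set reuse alone.
```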

1 As methodology

What is a benchmark in the scientific method? I expect Moritz Hardt’s textbook The Emerging Science of Machine Learning Benchmarks (Hardt 2025) to be the definitive reference on the topic.

From the Introduction:

In developing benchmarks, pattern recognition discovered its own instance of the iron rule of modern science. A term coined by the philosopher Michael Strevens, the iron rule asserts that all disputes between scientists must ultimately be settled by competitive empirical testing. In this view, modern scientific communities organize around empirical protocols that lay out the rules of scientific competition. These are a lot like the rules in a sporting competition. Scientists are free to think and do whatever they want, but for the purposes of scientific competition, they stick to the rules.

The iron rule makes a virtue out of what might seem like a problem: relentless competition among scientists. By making empirical testing the objective, scientists accumulate knowledge as they compete. Scientific institutions—funding agencies, journals, and universities alike—reinforce the rule by rewarding those who come out ahead in the metrics. Deciding who gets what via empirical testing lowers friction in the gears of science, as it seems to avoid drawn-out debate and keeps personal opinions in check. What results, Strevens argues, is an efficient knowledge machine that powers modern science.

Benchmarks are the iron rule of machine learning research and a radically simple contract at that: Anything goes on the training set, competitive ranking on the test set. The recipe is simple. What’s surprisingly hard is to explain why and when it should work as an engine of progress.

2 AGI benchmarks

a.k.a. evals. From the abstract of Humanity’s Last Exam:

Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity’s Last Exam, a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. The dataset consists of 2,700 challenging questions across over a hundred subjects. We publicly release these questions, while maintaining a private test set of held-out questions to assess model overfitting.

Duchnowski, Pavlick, and Koller (2025):

We introduce the dataset of Everyday Hard Optimization Problems (EHOP), a collection of NP-hard optimization problems expressed in natural language. EHOP includes problem formulations that could be found in computer science textbooks, versions that are dressed up as problems that could arise in real life, and variants of well-known problems with inverted rules. We find that state-of-the-art LLMs, across multiple prompting strategies, systematically solve textbook problems more accurately than their real-life and inverted counterparts. We argue that this evidence shows LLMs adapt solutions seen during training, rather than leveraging reasoning abilities to generalise to novel problems.

3 Gaming, shortcuts

Oleg Trott on How to sneak up competition leaderboards.

Jörn-Henrik Jacobsen, Robert Geirhos, Claudio Michaelis: Shortcuts: How Neural Networks Love to Cheat.

Sanjeev Arora and Yi Zhang’s Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis takes a minimum description length approach to meta-overfitting, which I will not summarise except to recommend it for being extremely psychedelic.
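
A related defence: Blum and Hardt’s Ladder (in the references) makes the public leaderboard less informative to adaptive probing by only acknowledging a submission when it beats the incumbent by more than a fixed step, and by rounding whatever it reveals. A rough sketch of that mechanism as I understand it, with 0-1 loss and the step size picked arbitrarily:

```python
import numpy as np

class Ladder:
    """Sketch of a Ladder-style leaderboard (after Blum & Hardt 2015): report a
    new best only when a submission improves on the incumbent by at least
    `step`, and round the released score to a multiple of `step`."""

    def __init__(self, y_test: np.ndarray, step: float = 0.01):
        self.y_test = y_test
        self.step = step
        self.best_loss = float("inf")

    def submit(self, predictions: np.ndarray) -> float:
        loss = float(np.mean(predictions != self.y_test))   # empirical 0-1 loss
        if loss < self.best_loss - self.step:
            # Only a meaningful improvement changes the public score, and even
            # then the submitter learns it only up to the step resolution.
            self.best_loss = round(loss / self.step) * self.step
        return self.best_loss
```

Failed probes of the test set return nothing new, which is what lets the public score keep tracking held-out performance through many more adaptive submissions than a naïve leaderboard would survive.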

4 Algorithm selection framework

Idea: instead of finding the “best” algorithm overall, find the best algorithm for a given problem instance. This is the Algorithm Selection Problem (Rice 1976).
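
A minimal version of per-instance selection: fit one model per algorithm that predicts its cost from instance features, then run whichever algorithm is predicted to be cheapest on each new instance. The features, cost model and synthetic data below are all placeholders, just to make the shape of the approach concrete:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in: each row is a problem instance summarised by two features
# (say, size and density); each column of `costs` is one algorithm's measured cost.
n_instances = 500
features = rng.uniform(size=(n_instances, 2))
costs = np.column_stack([
    features[:, 0],               # algorithm 0 is cheap on "small" instances
    1.0 - features[:, 1],         # algorithm 1 is cheap on "dense" instances
    np.full(n_instances, 0.55),   # algorithm 2 is mediocre everywhere
]) + 0.05 * rng.standard_normal((n_instances, 3))

# One performance model per algorithm: instance features -> predicted cost.
models = [
    RandomForestRegressor(n_estimators=100, random_state=0).fit(features, costs[:, k])
    for k in range(costs.shape[1])
]

def select_algorithm(x: np.ndarray) -> int:
    """Return the index of the algorithm with the lowest predicted cost on x."""
    predictions = [m.predict(x.reshape(1, -1))[0] for m in models]
    return int(np.argmin(predictions))

print(select_algorithm(np.array([0.9, 0.2])))   # large, sparse instance: the all-rounder (algorithm 2) should win
```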

A newer variant of this is Instance Space Analysis, a way of looking at the performance of algorithms across a whole range of problem instances. From Smith-Miles et al. (2014):

This paper tackles the difficult but important task of objective algorithm performance assessment for optimisation. Rather than reporting average performance of algorithms across a set of chosen instances, which may bias conclusions, we propose a methodology to enable the strengths and weaknesses of different optimisation algorithms to be compared across a broader instance space. The results reported in a recent Computers and Operations Research paper comparing the performance of graph colouring heuristics are revisited with this new methodology to demonstrate (i) how pockets of the instance space can be found where algorithm performance varies significantly from the average performance of an algorithm; (ii) how the properties of the instances can be used to predict algorithm performance on previously unseen instances with high accuracy; and (iii) how the relative strengths and weaknesses of each algorithm can be visualised and measured objectively.

See Instance Space Analysis for Rigorous and Insightful Algorithm Testing and Smith-Miles and Muñoz (2023).
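
The instance-space picture itself can be approximated in a few lines: project instance features to 2D, mark each instance with the algorithm that actually won on it, and look for coherent footprints. The sketch below uses a plain PCA projection and synthetic data in the spirit of the previous one, whereas the papers above construct the projection far more carefully:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Synthetic instances and per-algorithm costs, plus a few low-variance
# nuisance features so the 2D projection has something to do.
features = rng.uniform(size=(500, 5))
features[:, 2:] *= 0.2
costs = np.column_stack([
    features[:, 0],
    1.0 - features[:, 1],
    np.full(500, 0.55),
]) + 0.05 * rng.standard_normal((500, 3))

winners = np.argmin(costs, axis=1)                       # best algorithm per instance
embedding = PCA(n_components=2).fit_transform(features)  # crude 2D "instance space"

for k in range(costs.shape[1]):
    mask = winners == k
    plt.scatter(embedding[mask, 0], embedding[mask, 1], s=8, label=f"algorithm {k} wins")
plt.xlabel("instance-space axis 1")
plt.ylabel("instance-space axis 2")
plt.legend()
plt.title("Per-algorithm footprints in a projected instance space")
plt.show()
```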

4.1 Goodhart’s law in particular

Goodhart’s law.

Filip Piekniewski on the tendency to select bad target losses for convenience. See also Measuring Goodhart’s Law at OpenAI.
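
The overoptimisation flavour of Goodhart’s law is easy to reproduce in a toy model: score candidates with a proxy that is correlated with what we actually care about, apply more and more selection pressure on the proxy, and watch the true objective rise and then fall. The functional forms below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_true_value(n_candidates: int, trials: int = 2_000) -> float:
    """Average true value of the candidate that scores best on the proxy."""
    total = 0.0
    for _ in range(trials):
        quality = rng.standard_normal(n_candidates)   # what we actually care about
        gaming = rng.standard_normal(n_candidates)    # exploitable slack in the metric
        proxy = quality + gaming                      # the benchmark score we optimise
        true = quality - 0.5 * gaming**2              # gaming eventually hurts for real
        total += true[np.argmax(proxy)]
    return total / trials

for n in (1, 3, 10, 30, 100, 300, 1_000):
    print(f"selection pressure n = {n:>5}: mean true value {mean_true_value(n):+.3f}")
# Mild optimisation of the proxy improves the true objective; push harder and the
# winners are increasingly those that exploit the slack, so the true value falls.
```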

5 Measuring speed

Lots of algorithms claim to go fast, but on modern hardware that is a complicated claim: memory layout, caches and environmental noise can easily swamp the effect being measured. Stabilizer attempts to randomise memory layout during execution to give a “fair” comparison.
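
Even without Stabilizer-style randomisation, the minimum for an honest speed claim is warmup, repetition, a robust summary with spread rather than a single best run, and a geometric mean when aggregating normalised results across benchmarks (the point of Fleming and Wallace 1986, in the references). A bare-bones harness along those lines; the example workload is a placeholder:

```python
import math
import statistics
import time

def time_function(fn, *args, warmup: int = 3, repeats: int = 30) -> dict:
    """Time fn(*args): discard warmup runs, then report median and IQR in seconds."""
    for _ in range(warmup):                  # let caches, JITs and frequency scaling settle
        fn(*args)
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    q1, median, q3 = statistics.quantiles(samples, n=4)
    return {"median_s": median, "iqr_s": q3 - q1, "runs": repeats}

def geometric_mean_speedup(ratios: list[float]) -> float:
    """Aggregate per-benchmark speedup ratios with a geometric mean, the defensible
    way to summarise normalised results (Fleming and Wallace 1986)."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

if __name__ == "__main__":
    print(time_function(sorted, list(range(100_000, 0, -1))))
    print(geometric_mean_speedup([1.4, 0.9, 2.1]))
```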

6 Performativity

There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.

– Douglas Adams, The Restaurant at the End of the Universe

7 Incoming

MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering | OpenAI

We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering. To this end, we curate 75 ML engineering-related competitions from Kaggle, creating a diverse set of challenging tasks that test real-world ML engineering skills such as training models, preparing datasets, and running experiments. We establish human baselines for each competition using Kaggle’s publicly available leaderboards. We use open-source agent scaffolds to evaluate several frontier language models on our benchmark, finding that the best-performing setup—OpenAI’s o1-preview with AIDE scaffolding—achieves at least the level of a Kaggle bronze medal in 16.9% of competitions. In addition to our main results, we investigate various forms of resource-scaling for AI agents and the impact of contamination from pre-training. We open-source our benchmark code to facilitate future research in understanding the ML engineering capabilities of AI agents.

8 References

Arora, and Zhang. 2021. “Rip van Winkle’s Razor: A Simple Estimate of Overfit to Test Data.” arXiv:2102.13189 [Cs, Stat].
Blum, and Hardt. 2015. “The Ladder: A Reliable Leaderboard for Machine Learning Competitions.” arXiv:1502.04585 [Cs].
Brockman, Cheung, Pettersson, et al. 2016. “OpenAI Gym.” arXiv:1606.01540 [Cs].
Casenave, Roynard, Staber, et al. 2025. “Physics-Learning AI Datamodel (PLAID) Datasets: A Collection of Physics Simulations for Machine Learning.”
Duchnowski, Pavlick, and Koller. 2025. “EHOP: A Dataset of Everyday NP-Hard Optimization Problems.”
Fleming, and Wallace. 1986. “How Not to Lie with Statistics: The Correct Way to Summarize Benchmark Results.” Communications of the ACM.
Geirhos, Jacobsen, Michaelis, et al. 2020. “Shortcut Learning in Deep Neural Networks.” arXiv:2004.07780 [Cs, q-Bio].
Hardt. 2025. The Emerging Science of Machine Learning Benchmarks.
Hutson. 2022. “Taught to the Test.” Science.
Hyndman. 2020. “A Brief History of Forecasting Competitions.” International Journal of Forecasting, M4 Competition.
Koch, and Peterson. 2024. “From Protoscience to Epistemic Monoculture: How Benchmarking Set the Stage for the Deep Learning Revolution.”
Lathuilière, Mesejo, Alameda-Pineda, et al. 2020. “A Comprehensive Analysis of Deep Regression.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Liu, Miao, Zhan, et al. 2019. “Large-Scale Long-Tailed Recognition in an Open World.” In.
Lones. 2021. “How to Avoid Machine Learning Pitfalls: A Guide for Academic Researchers.”
Makridakis, Spiliotis, and Assimakopoulos. 2020. “The M4 Competition: 100,000 Time Series and 61 Forecasting Methods.” International Journal of Forecasting, M4 Competition.
Mitchell, Wu, Zaldivar, et al. 2019. “Model Cards for Model Reporting.” In Proceedings of the Conference on Fairness, Accountability, and Transparency. FAT* ’19.
Musgrave, Belongie, and Lim. 2020. “A Metric Learning Reality Check.” arXiv:2003.08505 [Cs].
Mytkowicz, Diwan, Hauswirth, et al. 2009. “Producing Wrong Data Without Doing Anything Obviously Wrong!” In Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems. ASPLOS XIV.
Olson, La Cava, Orzechowski, et al. 2017. “PMLB: A Large Benchmark Suite for Machine Learning Evaluation and Comparison.” BioData Mining.
Raji, Bender, Paullada, et al. 2021. “AI and the Everything in the Whole Wide World Benchmark.”
Rice. 1976. “The Algorithm Selection Problem.” Advances in Computers.
Smith-Miles, Baatar, Wreford, et al. 2014. “Towards Objective Measures of Algorithm Performance Across Instance Space.” Computers & Operations Research.
Smith-Miles, and Muñoz. 2023. “Instance Space Analysis for Algorithm Testing: Methodology and Software Tools.” ACM Computing Surveys.
Suganthan, Hansen, Liang, et al. n.d. “Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization.”
v. Kistowski, Arnold, Huppler, et al. 2015. “How to Build a Benchmark.” In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering. ICPE ’15.