Machine learning's gamified version of the replication crisis is the paper mill, or perhaps the paper treadmill. In this system, something counts as a “result” if it performs well on a few conventional benchmarks. But how often does that demonstrate real progress, and how often is it merely overfitting to the benchmarks?
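A toy simulation (my own sketch, not from any of the cited works) illustrates the mechanism: even predictors with no skill at all will look good on a fixed benchmark if enough of them are tried, because the leaderboard reports the maximum over many submissions. With 1,000 random binary classifiers scored on 1,000 test labels, the best one comfortably beats the 50% chance level.

```python
import random

random.seed(0)
n_test, n_submissions = 1000, 1000

# A fixed, hidden benchmark test set with random binary labels.
labels = [random.randint(0, 1) for _ in range(n_test)]

# Each "competitor" submits pure noise; the leaderboard keeps the best score.
best_acc = 0.0
for _ in range(n_submissions):
    preds = [random.randint(0, 1) for _ in range(n_test)]
    acc = sum(p == y for p, y in zip(preds, labels)) / n_test
    best_acc = max(best_acc, acc)

print(f"best leaderboard accuracy from pure noise: {best_acc:.3f}")
```

The winning score reflects selection over submissions, not predictive skill; mechanisms like the Ladder (Blum and Hardt 2015) are designed to blunt exactly this kind of adaptive overfitting.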
Oleg Trott, “How to Sneak Up Competition Leaderboards.”
Blum, Avrim, and Moritz Hardt. 2015. “The Ladder: A Reliable Leaderboard for Machine Learning Competitions,” February. http://arxiv.org/abs/1502.04585.
Lathuilière, Stéphane, Pablo Mesejo, Xavier Alameda-Pineda, and Radu Horaud. 2020. “A Comprehensive Analysis of Deep Regression.” IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (9): 2065–81. https://doi.org/10.1109/TPAMI.2019.2910523.
Musgrave, Kevin, Serge Belongie, and Ser-Nam Lim. 2020. “A Metric Learning Reality Check,” July. http://arxiv.org/abs/2003.08505.