Publication bias

Replication crises, P-values, bitching about journals, and other debilities of contemporary science at large



Multiple testing across a whole scientific field, with a side helping of biased data release and terrible incentives.

On one hand, we hope that journals will help us find things that are relevant. On the other hand, we hope that the things they help us find are actually true. It’s not at all obvious how to solve this kind of classification problem economically, but we kind of hope that peer review does it.

To read: My likelihood depends on your frequency properties.

Keywords: the “file-drawer process” and the “publication sieve”, which are the large-scale models of how this works across a scientific community, and “researcher degrees of freedom”, which is the model of how this works at the individual scale.
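A toy simulation of the publication sieve at field scale (my own sketch; all the numbers are made up): a field runs many small two-sample studies, most of them testing true nulls, and only the \(P < 0.05\) ones get written up. With these arbitrary settings, roughly half the published findings are false positives, and even the genuinely real effects come out over-estimated, because at this sample size only the lucky overestimates clear the bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 5000     # a hypothetical field's worth of small studies
n_per_group = 20     # small samples, as is common
true_effect = 0.5    # standardized effect size for the minority of real effects
prop_real = 0.1      # assume only 10% of tested hypotheses are actually true

published_real_effects = []   # published estimates of genuinely real effects
n_published, n_false_positive = 0, 0
for _ in range(n_studies):
    is_real = rng.random() < prop_real
    mu = true_effect if is_real else 0.0
    treatment = rng.normal(mu, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:   # the sieve: only "significant" results escape the file drawer
        n_published += 1
        if is_real:
            published_real_effects.append(treatment.mean() - control.mean())
        else:
            n_false_positive += 1

print(f"published: {n_published} of {n_studies} studies")
print(f"fraction of published findings that are false positives: "
      f"{n_false_positive / n_published:.2f}")
print(f"mean published estimate of the real effects: "
      f"{np.mean(published_real_effects):.2f} (truth: {true_effect})")
```

The qualitative picture is not sensitive to the particular constants: any combination of mostly-null hypotheses, low power, and a significance filter produces the same inflation.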

This is particularly pertinent in social psychology, where it turns out there is too much bullshit with \(P\leq 0.05\).
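At the individual scale, here is a minimal sketch of researcher degrees of freedom in the spirit of Simmons, Nelson, and Simonsohn (2011), although this particular toy is mine: with no true effect anywhere, measure five outcomes and report whichever one clears \(P < 0.05\). The nominal 5% false-positive rate climbs past 20%, and that is before adding optional stopping, covariate fishing, or subgroup hunting.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_sims = 10_000
n_per_group = 20
n_outcomes = 5   # several dependent variables the researcher could choose to report

false_positives = 0
for _ in range(n_sims):
    # no true effect on any outcome
    treatment = rng.normal(0.0, 1.0, (n_per_group, n_outcomes))
    control = rng.normal(0.0, 1.0, (n_per_group, n_outcomes))
    p_values = [stats.ttest_ind(treatment[:, j], control[:, j]).pvalue
                for j in range(n_outcomes)]
    if min(p_values) < 0.05:   # report whichever outcome "worked"
        false_positives += 1

print("nominal false-positive rate: 0.05")
print(f"rate with {n_outcomes} outcomes to pick from: {false_positives / n_sims:.2f}")
```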

We’re out here everyday, doing the dirty work finding noise and then polishing it into the hypotheses everyone loves. It’s not easy. —John Schmidt, The noise miners

Sanjay Srivastava, Everything is fucked: The syllabus.

On the easier problem of local theories

On the other hand, we can all agree that finding small-effect universal laws in messy domains like human society is a hard problem. In machine learning we frequently give up on that and just try to solve a local problem: does this method work in this domain, with enough certainty to be useful for the problem at hand? Then we still face a domain adaptation problem when we try to work out whether we are still solving the same problem, or at least one similar enough to it. But that feels like it might be easier by virtue of being less ambitious.

References

Gabry, Jonah, Daniel Simpson, Aki Vehtari, Michael Betancourt, and Andrew Gelman. 2019. “Visualization in Bayesian Workflow.” Journal of the Royal Statistical Society: Series A (Statistics in Society) 182 (2): 389–402.
Gelman, Andrew, and Cosma Rohilla Shalizi. 2013. “Philosophy and the Practice of Bayesian Statistics.” British Journal of Mathematical and Statistical Psychology 66 (1): 8–38.
McShane, Blakeley B., David Gal, Andrew Gelman, Christian Robert, and Jennifer L. Tackett. 2019. “Abandon Statistical Significance.” The American Statistician 73 (sup1): 235–45.
Nissen, Silas B., Tali Magidson, Kevin Gross, and Carl T. Bergstrom. 2016. “Publication Bias and the Canonization of False Facts.” arXiv:1609.00494 [Physics, Stat], September.
Ritchie, Stuart. 2020. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. First edition. New York: Metropolitan Books; Henry Holt and Company.
Simmons, Joseph P., Leif D. Nelson, and Uri Simonsohn. 2011. “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” Psychological Science 22 (11): 1359–66.
