Science, sociology and institution design for

Scientist, falsify thyself



On the thing that I presume academic publishing is supposed to do: further science. Reputation systems and other mechanisms for trust in science, a.k.a. collective knowledge of reality itself.

I would like to consider the system of peer review, networking, conferencing, publishing and acclaim, see how closely it approximates an ideal system for uncovering truth, and further, imagine how we could make a better system. But I will not do that right now; for now I just collect some provocative links on that theme, in the hope of finding time for more thought later.

Vesalius pioneers scientific review by peering. Used under licence from the University of Basel.

Open review processes, practical

PubPeer (who are also behind Peeriodicals) produces a peer-review overlay for web browsers to spread their commentary and peer critique more widely. The site itself is brusquely confusing, but well blogged; you’ll get the idea. They are not afraid of invective, and I initially thought they looked more amateurish than effective. But I was wrong; they are quite selective, and they seem to be close to the best elective peer review going today. This system has been implicated in topical high-profile retractions (e.g. 1, 2).

Mathematical models of the reviewing process

e.g. Cole, Jr, and Simon (1981); Lindsey (1988); Ragone et al. (2013); Shah et al. (2016); Whitehurst (1984).

The experimental data from the NeurIPS reviewing experiments is interesting too. See e.g. Shah et al. (2016) or a blog post on the 2014 experiment (1, 2).
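To get a feel for why substantial accept/reject disagreement is unsurprising even with well-intentioned reviewers, here is a minimal Monte Carlo sketch, not the actual NeurIPS analysis, of two independent committees ranking noisy scores of the same papers. The latent-quality model, the noise level and the acceptance rate are all made-up illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_disagreement(n_papers=1000, n_reviewers=3, noise_sd=1.0,
                          accept_rate=0.225, n_trials=200):
    """Estimate how often two independent committees disagree about a
    paper that at least one of them accepted.

    Illustrative assumptions: each paper has a latent quality ~ N(0, 1),
    each review is quality + N(0, noise_sd), and each committee accepts
    the top accept_rate fraction by mean score.
    """
    disagreements = []
    for _ in range(n_trials):
        quality = rng.normal(0.0, 1.0, size=n_papers)
        # Two committees score the same papers independently.
        scores_a = quality + rng.normal(0, noise_sd, (n_papers, n_reviewers)).mean(axis=1)
        scores_b = quality + rng.normal(0, noise_sd, (n_papers, n_reviewers)).mean(axis=1)
        k = int(accept_rate * n_papers)
        accept_a = set(np.argsort(-scores_a)[:k])
        accept_b = set(np.argsort(-scores_b)[:k])
        union = accept_a | accept_b
        overlap = accept_a & accept_b
        disagreements.append(1 - len(overlap) / len(union))
    return float(np.mean(disagreements))

print(simulate_disagreement())  # noisier reviews => more disagreement
```

Cranking `noise_sd` up or down shows how quickly ranking-based acceptance amplifies reviewer noise into inconsistent decisions near the threshold, which is the qualitative pattern the split-committee experiments report.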

Economics of publishing

See academic publishing.

Mechanism design for the peer review process

There is some fun mechanism design in this, e.g. Charlin and Zemel (2013); Gasparyan et al. (2015); Jan (n.d.); Merrifield and Saari (2009); Solomon (2007); Xiao, Dörfler, and van der Schaar (2014); Xu, Zhao, and Shi (n.d.).

Here is an interesting edge case in peer review and scientific reputation. Adam Becker, Junk Science or the Real Thing? ‘Inference’ Publishes Both. As far as I’m concerned, publishing crap is not in itself catastrophic. A process that fails to discourage crap would be bad.

How well does academia gatekeep?

(Baldwin 2018):

This essay traces the history of refereeing at specialist scientific journals and at funding bodies and shows that it was only in the late twentieth century that peer review came to be seen as a process central to scientific practice. Throughout the nineteenth century and into much of the twentieth, external referee reports were considered an optional part of journal editing or grant making. The idea that refereeing is a requirement for scientific legitimacy seems to have arisen first in the Cold War United States. In the 1970s, in the wake of a series of attacks on scientific funding, American scientists faced a dilemma: there was increasing pressure for science to be accountable to those who funded it, but scientists wanted to ensure their continuing influence over funding decisions. Scientists and their supporters cast expert refereeing—or “peer review,” as it was increasingly called—as the crucial process that ensured the credibility of science as a whole. Taking funding decisions out of expert hands, they argued, would be a corruption of science itself. This public elevation of peer review both reinforced and spread the belief that only peer-reviewed science was scientifically legitimate.

Thomas Basbøll says

It is commonplace today to talk about “knowledge production” and the university as a site of innovation. But the institution was never designed to “produce” something nor even to be especially innovative. Its function was to conserve what we know. It just happens to be in the nature of knowledge that it cannot be conserved if it does not grow.

Andrew Marzoni, Academia is a cult. Adam Becker on the assumptions and pathologies revealed by Wolfram’s latest branding and positioning:

So why did Wolfram announce his ideas this way? Why not go the traditional route? “I don’t really believe in anonymous peer review,” he says. “I think it’s corrupt. It’s all a giant story of somewhat corrupt gaming, I would say. I think it’s sort of inevitable that happens with these very large systems. It’s a pity.”

So what are Wolfram’s goals? He says he wants the attention and feedback of the physics community. But his unconventional approach—soliciting public comments on an exceedingly long paper—almost ensures it shall remain obscure. Wolfram says he wants physicists’ respect. The ones consulted for this story said gaining it would require him to recognize and engage with the prior work of others in the scientific community.

And when provided with some of the responses from other physicists regarding his work, Wolfram is singularly unenthused. “I’m disappointed by the naivete of the questions that you’re communicating,” he grumbles. “I deserve better.”

Style guide for reviews and rebuttals

See scientific writing.

References

A Literal Banana. 2020. “Extended Sniff Test.”
Afonso, Alexandre. 2013. “How Academia Resembles a Drug Gang.” Impact of Social Sciences (blog). December 11, 2013. http://blogs.lse.ac.uk/impactofsocialsciences/2013/12/11/how-academia-resembles-a-drug-gang/.
Agassi, Joseph. 1974. “The Logic of Scientific Inquiry.” Synthese 26: 498–514. https://doi.org/10.1007/BF00883107.
Alon, Uri. 2009. “How to Choose a Good Scientific Problem.” Molecular Cell 35 (6): 726–28. https://doi.org/10.1016/j.molcel.2009.09.013.
Arbesman, Samuel, and Nicholas A Christakis. 2011. “Eurekometrics: Analyzing the Nature of Discovery.” PLoS Comput Biol 7 (6): e1002072. https://doi.org/10.1371/journal.pcbi.1002072.
Arbilly, Michal, and Kevin N. Laland. 2017. “The Magnitude of Innovation and Its Evolution in Social Animals.” Proceedings of the Royal Society B: Biological Sciences 284 (1848). https://doi.org/10.1098/rspb.2016.2385.
Azoulay, Pierre, Christian Fons-Rosen, and Joshua S. Graff Zivin. 2015. “Does Science Advance One Funeral at a Time?” Working Paper 21788. National Bureau of Economic Research. https://doi.org/10.3386/w21788.
Baldwin, Melinda. 2018. “Scientific Autonomy, Public Accountability, and the Rise of ‘Peer Review’ in the Cold War United States.” Isis 109 (3): 538–58. https://doi.org/10.1086/700070.
Björk, Bo-Christer, and David Solomon. 2013. “The Publishing Delay in Scholarly Peer-Reviewed Journals.” Journal of Informetrics 7 (4): 914–23. https://doi.org/10.1016/j.joi.2013.09.001.
Bogich, Tiffany L, Sebastien Balleseteros, Robin Berjon, Chris Callahan, and Leon Chen. n.d. “On the Marginal Cost of Scholarly Communication.” https://research.science.ai/article/on-the-marginal-cost-of-scholarly-communication.
Charlin, Laurent, and Richard Zemel. 2013. “The Toronto Paper Matching System: An Automated Paper-Reviewer Assignment System,” May. https://openreview.net/forum?id=caynafZAnBafx.
Cole, S., Jr, and G. A. Simon. 1981. “Chance and Consensus in Peer Review.” Science 214 (4523): 881–86. https://doi.org/10.1126/science.7302566.
Coscia, Michele, and Luca Rossi. 2020. “Distortions of Political Bias in Crowdsourced Misinformation Flagging.” Journal of The Royal Society Interface 17 (167): 20200020. https://doi.org/10.1098/rsif.2020.0020.
Couzin-Frankel, Jennifer. 2015. “PubPeer Co-Founder Reveals Identity—and New Plans.” Science 349 (6252): 1036–36. https://doi.org/10.1126/science.349.6252.1036.
Gasparyan, Armen Yuri, Alexey N. Gerasimov, Alexander A. Voronov, and George D. Kitas. 2015. “Rewarding Peer Reviewers: Maintaining the Integrity of Science Communication.” Journal of Korean Medical Science 30 (4): 360–64. https://doi.org/10.3346/jkms.2015.30.4.360.
Gelman, Andrew. 2011. “Experimental Reasoning in Social Science.” In Field Experiments and Their Critics.
Gharbi, Musa al-. 2020. “Race and the Race for the White House: On Social Research in the Age of Trump.” Preprint. SocArXiv. https://doi.org/10.31235/osf.io/n8bkh.
“Go Forth and Replicate!” 2016. Nature News 536 (7617): 373. https://doi.org/10.1038/536373a.
Greenberg, Steven A. 2009. “How Citation Distortions Create Unfounded Authority: Analysis of a Citation Network.” BMJ 339 (July): b2680. https://doi.org/10.1136/bmj.b2680.
Hallsson, Bjørn G., and Klemens Kappel. 2020. “Disagreement and the Division of Epistemic Labor.” Synthese 197 (7): 2823–47. https://doi.org/10.1007/s11229-018-1788-6.
Heesen, Remco, and Liam Kofi Bright. n.d. “Is Peer Review a Good Idea?” The British Journal for the Philosophy of Science. Accessed July 26, 2020. https://doi.org/10.1093/bjps/axz029.
Hodges, James S. 2019. “Statistical Methods Research Done as Science Rather Than Mathematics.” arXiv:1905.08381 [stat], May. http://arxiv.org/abs/1905.08381.
Ioannidis, John P. 2005. “Why Most Published Research Findings Are False.” PLoS Medicine 2 (8): e124. https://doi.org/10.1371/journal.pmed.0020124.
Jan, Zeeshan. n.d. “Recognition and Reward System for Peer-Reviewers,” 9.
Jiménez, Ángel V., and Alex Mesoudi. 2019. “Prestige-Biased Social Learning: Current Evidence and Outstanding Questions.” Palgrave Communications 5 (1): 1–12. https://doi.org/10.1057/s41599-019-0228-7.
Kirman, Alan. 1992. “Whom or What Does the Representative Individual Represent?” The Journal of Economic Perspectives 6 (2): 117–36. http://www.jstor.org/stable/2138411.
———. 2010. “Learning in Agent Based Models.”
Krikorian, Gaëlle, and Amy Kapczynski. 2010. Access to knowledge in the age of intellectual property. New York; Cambridge, Mass.: Zone Books ; Distributed by the MIT Press. https://monoskop.org/images/e/e7/Krikorian_Kapczynski_eds_Access_to_Knowledge_in_the_Age_of_Intellectual_Property_2010.pdf.
Lakatos, Imre. 1980. The Methodology of Scientific Research Programmes: Volume 1 : Philosophical Papers. Cambridge University Press.
Laland, Kevin N. 2004. “Social Learning Strategies.” Animal Learning & Behavior 32 (1): 4–14. https://doi.org/10.3758/BF03196002.
Lindsey, D. 1988. “Assessing Precision in the Manuscript Review Process: A Little Better Than a Dice Roll.” Scientometrics 14 (1): 75–82. https://doi.org/10.1007/BF02020243.
McCook, Alison. 2017. “Meet PubPeer 2.0: New Version of Post-Publication Peer Review Site Launches Today.” Retraction Watch (blog). June 15, 2017. https://retractionwatch.com/2017/06/15/meet-pubpeer-2-0-new-version-post-publication-peer-review-site-launches-today/.
Medawar, Peter B. 1969. Induction and Intuition in Scientific Thought. American Philosophical Society Philadelphia.
———. 1982. Pluto’s Republic. Oxford University Press.
———. 1984. The Limits of Science. Harper & Row.
Merali, Zeeya. 2010. “Computational Science: Error.” Nature 467: 775–77. https://doi.org/10.1038/467775a.
Merrifield, Michael R, and Donald G Saari. 2009. “Telescope Time Without Tears: A Distributed Approach to Peer Review.” Astronomy & Geophysics 50 (4): 4.16–20. https://doi.org/10.1111/j.1468-4004.2009.50416.x.
“Nature Editors: All Hat and No Cattle.” 2016, December. https://blog.pubpeer.com/publications/AE11BE44CF3C40A558F3B453BF53C7#14.
Nguyen, C. Thi. 2020. “Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts.” Synthese 197 (7): 2803–21. https://doi.org/10.1007/s11229-018-1692-0.
Post, Daniel J. van der, Mathias Franz, and Kevin N. Laland. 2016. “Skill Learning and the Evolution of Social Learning Mechanisms.” BMC Evolutionary Biology 16 (1): 166. https://doi.org/10.1186/s12862-016-0742-9.
“Post-Publication Criticism Is Crucial, but Should Be Constructive.” 2016. Nature News 540 (7631): 7. https://doi.org/10.1038/540007b.
Potts, Jason, John Hartley, Lucy Montgomery, Cameron Neylon, and Ellie Rennie. 2016. “A Journal Is a Club: A New Economic Model for Scholarly Publishing.” SSRN Scholarly Paper ID 2763975. Rochester, NY: Social Science Research Network. http://papers.ssrn.com/abstract=2763975.
Ragone, Azzurra, Katsiaryna Mirylenka, Fabio Casati, and Maurizio Marchese. 2013. “On Peer Review in Computer Science: Analysis of Its Effectiveness and Suggestions for Improvement.” Scientometrics 97 (2): 317–56. https://doi.org/10.1007/s11192-013-1002-z.
Rekdal, Ole Bjørn. 2014. “Academic Urban Legends.” Social Studies of Science 44 (4): 638–54. https://doi.org/10.1177/0306312714535679.
Ridley, J, N Kolm, R P Freckelton, and M J G Gage. 2007. “An Unexpected Influence of Widely Used Significance Thresholds on the Distribution of Reported P-Values.” Journal of Evolutionary Biology 20: 1082–89. https://doi.org/10.1111/j.1420-9101.2006.01291.x.
Ritchie, Stuart. 2020. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. First edition. New York: Metropolitan Books ; Henry Holt and Company.
Rzhetsky, Andrey, Jacob G. Foster, Ian T. Foster, and James A. Evans. 2015. “Choosing Experiments to Accelerate Collective Discovery.” Proceedings of the National Academy of Sciences 112 (47): 14569–74. https://doi.org/10.1073/pnas.1509757112.
Schimmer, Ralf, Kai Karin Geschuhn, and Andreas Vogler. 2015. “Disrupting the subscription journals’ business model for the necessary large-scale transformation to open access.” https://doi.org/10.17617/1.3.
Sekara, Vedran, Pierre Deville, Sebastian E. Ahnert, Albert-László Barabási, Roberta Sinatra, and Sune Lehmann. 2018. “The Chaperone Effect in Scientific Publishing.” Proceedings of the National Academy of Sciences 115 (50): 12603–7. https://doi.org/10.1073/pnas.1800471115.
Sen, Amartya K. 1977. “Rational Fools: A Critique of the Behavioral Foundations of Economic Theory.” Philosophy and Public Affairs 6: 317–44. https://doi.org/10.2307/2264946.
Shah, Nihar B, Behzad Tabibian, Krikamol Muandet, and Isabelle Guyon. 2016. “Design and Analysis of the NIPS 2016 Review Process,” 34.
Solomon, David J. 2007. “The Role of Peer Review for Scholarly Journals in the Information Age.” Journal of Electronic Publishing 10 (1). https://doi.org/10.3998/3336451.0010.107.
Spranzi, Marta. 2004. “Galileo and the Mountains of the Moon: Analogical Reasoning, Models and Metaphors in Scientific Discovery.” Journal of Cognition and Culture 4 (3): 451–83. https://doi.org/10.1163/1568537042484904.
Stove, David Charles. 1982. Popper and After: Four Modern Irrationalists. Pergamon.
Suppes, Patrick. 2002. Representation and Invariance of Scientific Structures. CSLI Publications.
Thagard, Paul. 1993. “Societies of Minds: Science as Distributed Computing.” Studies in History and Philosophy of Modern Physics 24: 49.
———. 1994. “Mind, Society, and the Growth of Knowledge.” Philosophy of Science 61.
———. 1997. “Collaborative Knowledge.” Noûs 31 (2): 242–61.
———. 2005. “How to Be a Successful Scientist.” Scientific and Technological Thinking, 159–71.
———. 2007. “Coherence, Truth, and the Development of Scientific Knowledge.” Philosophy of Science 74: 28–47. https://doi.org/10.1086/520941.
Thagard, Paul, and Abninder Litt. 2008. “Models of Scientific Explanation.” In The Cambridge Handbook of Computational Psychology. Cambridge: Cambridge University Press.
Thagard, Paul, and Jing Zhu. 2003. “Acupuncture, Incommensurability, and Conceptual Change.” Intentional Conceptual Change, 79–102.
Thurner, Stefan, and Rudolf Hanel. 2010. “Peer-Review in a World with Rational Scientists: Toward Selection of the Average.”
Van Noorden, Richard. 2013. “Open Access: The True Cost of Science Publishing.” Nature 495 (7442): 426–29. https://doi.org/10.1038/495426a.
Vazire, Simine. 2017. “Our Obsession with Eminence Warps Research.” Nature News 547 (7661): 7. https://doi.org/10.1038/547007a.
Whitehurst, Grover J. 1984. “Interrater Agreement for Journal Manuscript Reviews.” American Psychologist 39 (1): 22–28. https://doi.org/10.1037/0003-066X.39.1.22.
Wible, James R. 1998. Economics of Science. Routledge.
Xiao, Yuanzhang, Florian Dörfler, and Mihaela van der Schaar. 2014. “Incentive Design in Peer Review: Rating and Repeated Endogenous Matching.” arXiv:1411.2139 [cs], November. http://arxiv.org/abs/1411.2139.
Xu, Yichong, Han Zhao, and Xiaofei Shi. n.d. “Mechanism Design for Paper Review,” 9.
Yarkoni, Tal. 2019. “The Generalizability Crisis.” Preprint. PsyArXiv. https://doi.org/10.31234/osf.io/jqw35.
