Scientific institutions and mechanisms

Descriptive and normative

On heuristic mechanisms and institutional design for communities of scientific practice, stewarding the common-property resource that is human knowledge. Sociology of science, in other words. How do diverse, underfunded teams manage to advance truth with their weird prestige economy, despite the many pitfalls of publication filters and the like? What works in designing communities, practices and social norms, for scientific insiders and outsiders alike? How much communication is too much? How much iconoclasm is enough to defeat groupthink while still fostering the spread of good ideas? At an individual level, we might wonder about soft methodology.

A place to file questions like this, in other words (O’Connor and Wu 2021):

Diversity of practice is widely recognized as crucial to scientific progress. If all scientists perform the same tests in their research, they might miss important insights that other tests would yield. If all scientists adhere to the same theories, they might fail to explore other options which, in turn, might be superior. But the mechanisms that lead to this sort of diversity can also generate epistemic harms when scientific communities fail to reach swift consensus on successful theories. In this paper, we draw on extant literature using network models to investigate diversity in science. We evaluate different mechanisms from the modeling literature that can promote transient diversity of practice, keeping in mind ethical and practical constraints posed by real epistemic communities. We ask: what are the best ways to promote the right amount of diversity of practice in such communities?
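The "transient diversity" question lends itself to simulation. Below is a minimal sketch in the spirit of the network-epistemology models that O'Connor and Wu survey: a Zollman-style two-armed bandit on a ring network, where agents test whichever method they currently believe is better and update on their neighbours' results. All parameters and payoff values here are invented for illustration; this is not their model, just the genre of model.

```python
import random

def simulate(n_agents=10, neighbors=2, p_old=0.5, p_new=0.55,
             trials=10, rounds=200, seed=0):
    """Toy Zollman-style bandit model. Agents hold Beta beliefs about a
    *new* method's success rate; the old method's rate p_old is treated
    as known. Each round, agents who think the new method is better test
    it, and everyone updates on their neighbours' data. Returns how many
    rounds both methods remained in use (crude 'transient diversity')."""
    rng = random.Random(seed)
    # Beta(alpha, beta) beliefs about the new method, randomly initialised
    beliefs = [[rng.uniform(1, 4), rng.uniform(1, 4)] for _ in range(n_agents)]

    def nbrs(i):
        # ring lattice: each agent sees `neighbors` agents on each side (and itself)
        return [(i + d) % n_agents for d in range(-neighbors, neighbors + 1)]

    diverse_rounds = 0
    for _ in range(rounds):
        # an agent tries the new method iff its expected value beats p_old
        choices = [a / (a + b) > p_old for a, b in beliefs]
        if 0 < sum(choices) < n_agents:
            diverse_rounds += 1  # both methods still in play this round
        results = []
        for i in range(n_agents):
            if choices[i]:
                succ = sum(rng.random() < p_new for _ in range(trials))
                results.append((succ, trials - succ))
            else:
                results.append(None)  # no data generated on the new method
        # Bayesian updating on all visible experiments
        for i in range(n_agents):
            for j in nbrs(i):
                if results[j] is not None:
                    beliefs[i][0] += results[j][0]
                    beliefs[i][1] += results[j][1]
    return diverse_rounds
```

A typical finding in this literature is that sparser networks (smaller `neighbors`) sustain diversity longer, sometimes improving the chance the community settles on the genuinely better method.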

Mechanism design for science


Smaldino and O’Connor (2020):

Why do bad methods persist in some academic disciplines, even when they have been clearly rejected in others? What factors allow good methodological advances to spread across disciplines? In this paper, we investigate some key features determining the success and failure of methodological spread between the sciences. We introduce a formal model that considers factors like methodological competence and reviewer bias towards one’s own methods. We show how self-preferential biases can protect poor methodology within scientific communities, and lack of reviewer competence can contribute to failures to adopt better methods. We then use a second model to further argue that input from outside disciplines, especially in the form of peer review and other credit assignment mechanisms, can help break down barriers to methodological improvement. This work therefore presents an underappreciated benefit of interdisciplinarity.
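The mechanism in that abstract can be caricatured in a few lines. The sketch below is not Smaldino and O'Connor's actual model; it is a made-up toy in which a paper using a genuinely better method faces a reviewer panel, insiders who use the incumbent method reject it out of self-preference with probability `bias`, and otherwise reviewers recognise the improvement with probability `competence`. All numbers and names are invented.

```python
import random

def acceptance_rate(bias, competence, frac_outsiders=0.0,
                    n_reviewers=3, n_trials=10000, seed=0):
    """Estimate how often a paper using a better method survives majority
    review. Insiders reject out of hand with probability `bias`; any
    unbiased reviewer accepts with probability `competence`. Outsiders
    (drawn with probability `frac_outsiders`) carry no self-preference."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(n_trials):
        votes = 0
        for _ in range(n_reviewers):
            outsider = rng.random() < frac_outsiders
            if not outsider and rng.random() < bias:
                continue  # biased insider rejects regardless of merit
            if rng.random() < competence:
                votes += 1
        if votes > n_reviewers / 2:
            accepted += 1
    return accepted / n_trials
```

With strong insider bias, mixing outsiders into the reviewer pool sharply raises the acceptance rate of the better method, which is the qualitative point of the paper's second model.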

On the origin of psychological research practices, with special regard to self-reported nostril width:

when does a certain practice–e.g., a study design, a way to collect data, a particular statistical approach–“succeed” and start to dominate journals?

It must be capable of surviving a multi-stage selection procedure:

  1. Implementation must be sufficiently affordable so that researchers can actually give it a shot
  2. Once the authors have added it to a manuscript, it must be retained until submission
  3. The resulting manuscript must enter the peer-review process and survive it (without the implementation of the practice getting dropped on the way)
  4. The resulting publication needs to attract enough attention post-publication so that readers will feel inspired to implement it themselves, fueling the eternally turning wheel of Samsara publication-oriented science
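A back-of-envelope way to see why so few practices come to dominate: the stages act as a multiplicative filter, so even modest attrition at each step compounds. The stage probabilities below are invented purely for illustration.

```python
# Hypothetical survival probabilities for a new research practice at each
# stage of the selection procedure above; the numbers are made up.
stages = {
    "affordable to try":         0.5,
    "retained until submission": 0.8,
    "survives peer review":      0.6,
    "inspires readers to adopt": 0.3,
}

# overall survival is the product of the per-stage probabilities
survival = 1.0
for stage, p in stages.items():
    survival *= p

print(f"P(practice spreads) = {survival:.3f}")  # 0.5*0.8*0.6*0.3 = 0.072
```

Even with generous odds at every stage, fewer than one practice in ten makes it through the whole wheel.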

Hanania on Tetlock and the Taliban makes a point about the illusory nature of some expertise.

[Tetlock’s results] show that “expertise” as we understand it is largely fake. Should you listen to epidemiologists or economists when it comes to COVID-19? Conventional wisdom says “trust the experts.” The lesson of Tetlock (and the Afghanistan War), is that while you certainly shouldn’t be getting all your information from your uncle’s Facebook Wall, there is no reason to start with a strong prior that people with medical degrees know more than any intelligent person who honestly looks at the available data.

His examples about the science community are neat. Then he draws a longer bow and makes some, IMO, less considered swipes at straw-man diversity, which somewhat ruins the effect for me. Zeynep Tufekci gets at the actual problem that I think both the people who talk about contrarianism and those who talk about diversity would like to get at: do the incentives, and especially the incentives embedded in social structures, actually encourage researchers towards truths, or towards collective fictions?

Sometimes, going against consensus is conflated with contrarianism. Contrarianism is juvenile, and misleads people. It’s not a good habit.

The opposite of contrarianism isn’t accepting elite consensus or being gullible.

Groupthink, especially when big interests are involved, is common. The job is to resist groupthink with facts, logic, work and a sense of duty to the public. History rewards that, not contrarianism.

To get the right lessons from why we fail—be it masks or airborne transmission or failing to regulate tech when we could or Iraq war—it’s key to study how that groupthink occurred. It’s a sociological process: vested interests arguing themselves into positions that benefit them.

Scott Alexander’s Contrarians, Crackpots, and Consensus tries to crack this one open with an ontology.

I think a lot of things are getting obscured by the term “scientific establishment” or “scientific consensus”. Imagine a pyramid with the following levels from top to bottom:

FIRST, specialist researchers in a field…

SECOND, non-specialist researchers in a broader field…

THIRD, the organs and administrators of a field who help set guidelines…

FOURTH, science journalism, meaning everyone from the science reporters at the New York Times to the guys writing books with titles like The Antidepressant Wars to random bloggers…

ALSO FOURTH IN A DIFFERENT COLUMN OF THE PYRAMID BECAUSE THIS IS A HYBRID GREEK PYRAMID THAT HAS COLUMNS, “fieldworkers”, aka the professionals we charge with putting the research into practice. … FIFTH, the general public.

A lot of these issues make a lot more sense in terms of different theories going on at the same time on different levels of the pyramid. I get the impression that in the 1990s, the specialist researchers, the non-specialist researchers, and the organs and administrators were all pretty responsible about saying that the serotonin theory was just a theory and only represented one facet of the multifaceted disease of depression. Science journalists and prescribing psychiatrists were less responsible about this, and so the general public may well have ended up with an inaccurate picture.

Bright (2023):

Du Bois took quite the opposite route from trying to introduce lotteries, with their embrace of chance randomization. In fact, to a very considerable degree he centrally planned the sort of research his group would carry out so as to form an interlinking whole. Where the status quo system allows for competition between scientists to give funding out piecemeal to whoever seems best at a given moment, Du Bois’ work embodies the attitude that as far as possible our research activities should be coordinated, and not aimed at rewarding individual greatness but rather producing the best overall project. While ideas along these lines have not been totally without support in the history of philosophy of science (see e.g. Neurath 1946, Bernal 1949, Kummerfeld & Zollman 2015), it is safe to say the epistemic merits of this are relatively under-explored. Our brief examination of Du Bois’ plan will thus hopefully form a spur to generate more consideration of this sort of holistic line of action



Agassi, Joseph. 1974. “The Logic of Scientific Inquiry.” Synthese 26: 498–514.
Alon, Uri. 2009. “How to Choose a Good Scientific Problem.” Molecular Cell 35 (6): 726–28.
Arbesman, Samuel, and Nicholas A Christakis. 2011. “Eurekometrics: Analyzing the Nature of Discovery.” PLoS Comput Biol 7 (6): e1002072.
Arvan, Marcus, Liam Kofi Bright, and Remco Heesen. 2022. “Jury Theorems for Peer Review.” The British Journal for the Philosophy of Science, January.
Azoulay, Pierre, Christian Fons-Rosen, and Joshua S. Graff Zivin. 2015. “Does Science Advance One Funeral at a Time?” Working Paper 21788. National Bureau of Economic Research.
Bazzoli, Andrea. 2022. “Open Science and Epistemic Pluralism: A Tale of Many Perils and Some Opportunities.” Industrial and Organizational Psychology 15 (4): 525–28.
Bhattacharya, Jay, and Mikko Packalen. 2020. “Stagnation and Scientific Incentives.” Working Paper 26752. National Bureau of Economic Research.
Board, Simon, and Moritz Meyer-ter-Vehn. 2021. “Learning Dynamics in Social Networks.” Econometrica 89 (6): 2601–35.
Bright, Liam Kofi. 2023. “Du Bois on the Centralised Organisation of Science.” In Pluralising Philosophy’s Past, edited by Marius Backmann and Amber Griffioen.
Campante, Filipe, Ruben Durante, and Andrea Tesei. 2022. “Media and Social Capital.” Annual Review of Economics 14 (1): 69–91.
Chu, Johan S. G., and James A. Evans. 2021. “Slowed Canonical Progress in Large Fields of Science.” Proceedings of the National Academy of Sciences 118 (41): e2021636118.
Dang, Haixin, and Liam Kofi Bright. 2021. “Scientific Conclusions Need Not Be Accurate, Justified, or Believed by Their Authors.” Synthese 199 (3-4): 8187–8203.
Devezer, Berna, Luis G. Nardin, Bert Baumgaertner, and Erkan Ozge Buzbas. 2019. “Scientific Discovery in a Model-Centric Framework: Reproducibility, Innovation, and Epistemic Diversity.” PLOS ONE 14 (5): e0216125.
Dubova, Marina, Arseny Moskvichev, and Kevin Zollman. 2022. “Against Theory-Motivated Experimentation in Science.” MetaArXiv.
Farrow, Robert, and Rolin Moe. 2019. “Rethinking the Role of the Academy: Cognitive Authority in the Age of Post-Truth.” Teaching in Higher Education 24 (3): 272–87.
Galesic, Mirta, Daniel Barkoczi, Andrew Berdahl, Dora Biro, Giuseppe Carbone, Ilaria Giannoccaro, Robert Goldstone, et al. 2022. “Beyond Collective Intelligence: Collective Adaptation.” SocArXiv.
Gasparyan, Armen Yuri, Alexey N. Gerasimov, Alexander A. Voronov, and George D. Kitas. 2015. “Rewarding Peer Reviewers: Maintaining the Integrity of Science Communication.” Journal of Korean Medical Science 30 (4): 360–64.
Greenberg, Steven A. 2009. “How Citation Distortions Create Unfounded Authority: Analysis of a Citation Network.” BMJ 339 (July): b2680.
Healy, Kieran. 2015. “The Performativity of Networks.” European Journal of Sociology 56 (02): 175–205.
Heesen, Remco, and Liam Kofi Bright. 2021. “Is Peer Review a Good Idea?” The British Journal for the Philosophy of Science 72 (3): 635–63.
Hertz, Uri, Margaux Romand-Monnier, Konstantina Kyriakopoulou, and Bahador Bahrami. 2016. “Social influence protects collective decision making from equality bias.” Journal of Experimental Psychology. Human Perception and Performance 42 (2): 164–72.
Hirsch, Fred. 2013. “Social Limits to Growth.” In Social Limits to Growth. Harvard University Press.
Hoelzemann, Johannes, and Nicolas Klein. 2021. “Bandits in the Lab.” Quantitative Economics 12 (3): 1021–51.
Ioannidis, John P. 2005. “Why Most Published Research Findings Are False.” PLoS Medicine 2 (8): e124.
Jan, Zeeshan. n.d. “Recognition and Reward System for Peer-Reviewers,” 9.
Kearns, Hugh, and Maria Gardiner. 2011. “The Care and Maintenance of Your Adviser.” Nature 469 (7331): 570.
Lakatos, Imre. 1980. The Methodology of Scientific Research Programmes: Volume 1: Philosophical Papers. Cambridge University Press.
McElreath, Richard, and Robert Boyd. 2007. Mathematical Models of Social Evolution: A Guide for the Perplexed. University Of Chicago Press.
McElreath, Richard, and Paul E. Smaldino. 2015. “Replication, Communication, and the Population Dynamics of Scientific Discovery.” arXiv:1503.02780 [Stat], March.
Merrifield, Michael R, and Donald G Saari. 2009. “Telescope Time Without Tears: A Distributed Approach to Peer Review.” Astronomy & Geophysics 50 (4): 4.16–20.
Merton, Robert K. 1968. “The Matthew Effect in Science.” Science 159 (3810): 56–63.
———. 1988. “The Matthew Effect in Science, II: Cumulative Advantage and the Symbolism of Intellectual Property.” Isis 79 (4): 606–23.
Nissen, Silas B., Tali Magidson, Kevin Gross, and Carl T. Bergstrom. 2016. “Publication Bias and the Canonization of False Facts.” arXiv:1609.00494 [Physics, Stat], September.
O’Connor, Cailin. 2017. “Evolving to Generalize: Trading Precision for Speed.” British Journal for the Philosophy of Science 68 (2).
O’Connor, Cailin, and Justin Bruner. 2019. “Dynamics and Diversity in Epistemic Communities.” Erkenntnis 84 (1): 101–19.
O’Connor, Cailin, and James Owen Weatherall. 2017. “Scientific Polarization.” European Journal for Philosophy of Science 8 (3): 855–75.
———. 2019. The Misinformation Age: How False Beliefs Spread. 1 edition. New Haven: Yale University Press.
O’Connor, Cailin, and Jingyi Wu. 2021. “How Should We Promote Transient Diversity in Science?” MetaArXiv.
Osborne, Jonathan. 2022. “Science Education in an Age of Misinformation.”
Rekdal, Ole Bjørn. 2014. “Academic Urban Legends.” Social Studies of Science 44 (4): 638–54.
Robbins, Lionel. 1932. An Essay on the Nature and Significance of Economic Science. Macmillan.
Ross, Matthew B., Britta M. Glennon, Raviv Murciano-Goroff, Enrico G. Berkes, Bruce A. Weinberg, and Julia I. Lane. 2022. “Women Are Credited Less in Science Than Men.” Nature, June, 1–11.
Rubin, Hannah, and Cailin O’Connor. 2018. “Discrimination and Collaboration in Science.” Philosophy of Science 85 (3): 380–402.
Rzhetsky, Andrey, Jacob G. Foster, Ian T. Foster, and James A. Evans. 2015. “Choosing Experiments to Accelerate Collective Discovery.” Proceedings of the National Academy of Sciences 112 (47): 14569–74.
Smaldino, Paul E., and Cailin O’Connor. 2020. “Interdisciplinarity Can Aid the Spread of Better Methods Between Scientific Communities.” MetaArXiv.
Smith, Lones, Peter Norman Sørensen, and Jianrong Tian. 2021. “Informational Herding, Optimal Experimentation, and Contrarianism.” The Review of Economic Studies 88 (5): 2527–54.
Spranzi, Marta. 2004. “Galileo and the Mountains of the Moon: Analogical Reasoning, Models and Metaphors in Scientific Discovery.” Journal of Cognition and Culture 4 (3): 451–83.
Stove, David Charles. 1982. Popper and After: Four Modern Irrationalists. Pergamon.
Suppes, Patrick. 2002. Representation and Invariance of Scientific Structures. CSLI Publications.
Thagard, Paul. 1993. “Societies of Minds: Science as Distributed Computing.” Studies in History and Philosophy of Modern Physics 24: 49.
———. 1994. “Mind, Society, and the Growth of Knowledge.” Philosophy of Science 61.
———. 1997. “Collaborative Knowledge.” Noûs 31 (2): 242–61.
———. 2005. “How to Be a Successful Scientist.” Scientific and Technological Thinking, 159–71.
———. 2007. “Coherence, Truth, and the Development of Scientific Knowledge.” Philosophy of Science 74: 28–47.
Thagard, Paul, and Abninder Litt. 2008. “Models of Scientific Explanation.” In The Cambridge Handbook of Computational Psychology. Cambridge: Cambridge University Press.
Thagard, Paul, and Jing Zhu. 2003. “Acupuncture, Incommensurability, and Conceptual Change.” Intentional Conceptual Change, 79–102.
“The Importance of Frontier Knowledge for the Generation of Ideas.” 2018. CEPR.
Thurner, Stefan, and Rudolf Hanel. 2010. “Peer-Review in a World with Rational Scientists: Toward Selection of the Average.”
Valente, Thomas W, and Everett M. Rogers. 1995. “The Origins and Development of the Diffusion of Innovations Paradigm as an Example of Scientific Growth.” Science Communication 16 (3): 242–73.
Vazire, Simine. 2017. “Our Obsession with Eminence Warps Research.” Nature News 547 (7661): 7.
Wagenmakers, Eric-Jan, Alexandra Sarafoglou, and Balazs Aczel. 2022. “One Statistical Analysis Must Not Rule Them All.” Nature 605 (7910): 423–25.
Weisbuch, Gérard, Guillaume Deffuant, Frédéric Amblard, and Jean-Pierre Nadal. 2002. “Meet, Discuss, and Segregate!” Complexity 7 (3): 55–63.
Weng, L., A. Flammini, A. Vespignani, and F. Menczer. 2012. “Competition Among Memes in a World with Limited Attention.” Scientific Reports 2.
Wible, James R. 1998. Economics of Science. Routledge.
Williams, Daniel. 2022. “The Marketplace of Rationalizations.” Economics & Philosophy, March, 1–25.
Woodley, Lou, and Katie Pratt. 2020. “The CSCCE Community Participation Model – A Framework to Describe Member Engagement and Information Flow in STEM Communities,” August.
Wu, Jingyi, Cailin O’Connor, and Paul E. Smaldino. 2022. “The Cultural Evolution of Science.” MetaArXiv.
Yarkoni, Tal. 2019. “The Generalizability Crisis.” Preprint. PsyArXiv.
Zimmer, Carl. 2020. “How You Should Read Coronavirus Studies, or Any Science Paper.” The New York Times, June 1, 2020, sec. Science.
