On heuristic mechanism design and institutional design for communities of scientific practice, stewards of that common-property resource, human knowledge. Sociology of science, in other words. How do diverse, underfunded teams manage to advance truth with their weird prestige economy, despite the many pitfalls of publication filters and the like? What is effective in designing communities, practices and social norms, both for scientific insiders and for outsiders? How much communication is too much? How much iconoclasm is the right amount to defeat groupthink while still fostering the spread of good ideas? At an individual level we might wonder about soft methodology.
A place to file questions like this, in other words (O’Connor and Wu 2021):
Diversity of practice is widely recognized as crucial to scientific progress. If all scientists perform the same tests in their research, they might miss important insights that other tests would yield. If all scientists adhere to the same theories, they might fail to explore other options which, in turn, might be superior. But the mechanisms that lead to this sort of diversity can also generate epistemic harms when scientific communities fail to reach swift consensus on successful theories. In this paper, we draw on extant literature using network models to investigate diversity in science. We evaluate different mechanisms from the modeling literature that can promote transient diversity of practice, keeping in mind ethical and practical constraints posed by real epistemic communities. We ask: what are the best ways to promote the right amount of diversity of practice in such communities?
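To make "network models" concrete, here is a minimal sketch, not O’Connor and Wu’s actual model, of the kind of network-epistemology simulation this literature builds on (in the spirit of Zollman-style bandit models): agents choose between an established method and a possibly better new one, share their results with network neighbours, and update Beta beliefs. All the parameter values and the two network topologies below are illustrative assumptions, not anything from the paper.

```python
# A Zollman-style epistemic network sketch (illustrative, not O'Connor & Wu's model):
# agents pick whichever of two "methods" they currently believe is better,
# share experimental results with neighbours, and update Beta beliefs.
import random

N_AGENTS = 10               # size of the community
P_OLD, P_NEW = 0.5, 0.51    # true success rates: the new method is slightly better
N_TRIALS = 20               # experiments per agent per round
N_ROUNDS = 2000

def run(neighbours):
    """neighbours[i] = set of agents whose data agent i sees (including itself)."""
    # Beta(alpha, beta) belief about the NEW method, per agent;
    # the old method's rate P_OLD is treated as known.
    alpha = [random.uniform(1, 4) for _ in range(N_AGENTS)]
    beta = [random.uniform(1, 4) for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        results = []
        for i in range(N_AGENTS):
            # Only agents who currently favour the new method try it out.
            if alpha[i] / (alpha[i] + beta[i]) > P_OLD:
                successes = sum(random.random() < P_NEW for _ in range(N_TRIALS))
                results.append((i, successes))
        # Everyone updates on the data produced by agents they can see.
        for i, successes in results:
            for j in range(N_AGENTS):
                if i in neighbours[j]:
                    alpha[j] += successes
                    beta[j] += N_TRIALS - successes
    # Fraction of agents who end up (correctly) favouring the new method.
    return sum(a / (a + b) > P_OLD for a, b in zip(alpha, beta)) / N_AGENTS

# Compare a fully connected community with a sparse ring.
complete = [set(range(N_AGENTS)) for _ in range(N_AGENTS)]
ring = [{i, (i - 1) % N_AGENTS, (i + 1) % N_AGENTS} for i in range(N_AGENTS)]
print("complete:", run(complete), "ring:", run(ring))
```

Averaged over many runs, the sparse ring should (per Zollman’s original results) settle on the better method more reliably than the complete network, precisely because slower information flow preserves transient diversity of practice; the trade-off is that it takes longer to get there.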
Mechanism design for science
incoming
- Open call: Consequences of the Scientific Reform Movement · Journal of Trial & Error
- Scott And Scurvy (Idle Words)
- Escaping science’s paradox - Works in Progress discusses red-teaming science and incentivising productive failures
- Progress Studies: A Discipline is a Set of Institutional Norms (connection to innovation)
Hanania, in Tetlock and the Taliban, makes a point about the illusory nature of some expertise:
[Tetlock’s results] show that “expertise” as we understand it is largely fake. Should you listen to epidemiologists or economists when it comes to COVID-19? Conventional wisdom says “trust the experts.” The lesson of Tetlock (and the Afghanistan War), is that while you certainly shouldn’t be getting all your information from your uncle’s Facebook Wall, there is no reason to start with a strong prior that people with medical degrees know more than any intelligent person who honestly looks at the available data.
He has some clever examples about the scientific community in there. Then he draws a longer bow and takes some, IMO, less considered swipes at a straw-man version of diversity, which somewhat ruins the effect for me. Zeynep Tufekci gets at the actual problem that I think both the contrarianism people and the diversity people would like to get at: do the incentives, and especially the incentives baked into social structures, actually push researchers towards truths, or towards collective fictions?
Sometimes, going against consensus is conflated with contrarianism. Contrarianism is juvenile, and misleads people. It’s not a good habit.
The opposite of contrarianism isn’t accepting elite consensus or being gullible.
Groupthink, especially when big interests are involved, is common. The job is to resist groupthink with facts, logic, work and a sense of duty to the public. History rewards that, not contrarianism.
To get the right lessons from why we fail—be it masks or airborne transmission or failing to regulate tech when we could or Iraq war—it’s key to study how that groupthink occurred. It’s a sociological process: vested interests arguing themselves into positions that benefit them.
Scott Alexander’s Contrarians, Crackpots, and Consensus tries to crack this one open with an ontology:
I think a lot of things are getting obscured by the term “scientific establishment” or “scientific consensus”. Imagine a pyramid with the following levels from top to bottom:
FIRST, specialist researchers in a field…
SECOND, non-specialist researchers in a broader field…
THIRD, the organs and administrators of a field who help set guidelines…
FOURTH, science journalism, meaning everyone from the science reporters at the New York Times to the guys writing books with titles like The Antidepressant Wars to random bloggers…
ALSO FOURTH IN A DIFFERENT COLUMN OF THE PYRAMID BECAUSE THIS IS A HYBRID GREEK PYRAMID THAT HAS COLUMNS, “fieldworkers”, aka the professionals we charge with putting the research into practice. …
FIFTH, the general public.
A lot of these issues make a lot more sense in terms of different theories going on at the same time on different levels of the pyramid. I get the impression that in the 1990s, the specialist researchers, the non-specialist researchers, and the organs and administrators were all pretty responsible about saying that the serotonin theory was just a theory and only represented one facet of the multifaceted disease of depression. Science journalists and prescribing psychiatrists were less responsible about this, and so the general public may well have ended up with an inaccurate picture.