Crowd-sourced science



Mapping the world from smartphones, chat forums, and Reddit threads.

Open source science

Every Bug is Shallow if One of Your Readers is an Entomologist

Research is difficult because reality is complex and much of it is confusing or mysterious. But, to borrow Linus's Law, with enough eyeballs all research bugs are shallow too.
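As a toy calculation (my numbers, purely illustrative): if each independent reader catches a given subtle error with probability p, then the chance that at least one of n readers catches it is 1 − (1 − p)^n, which climbs quickly with n.

```python
# Toy calculation: probability that at least one of n independent
# readers catches an error, if each catches it with probability p.
# The value p = 0.01 is made up for illustration.
def p_caught(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 100, 1000):
    print(f"n={n:>4}: {p_caught(0.01, n):.3f}")
# n=   1: 0.010
# n=  10: 0.096
# n= 100: 0.634
# n=1000: 1.000
```

Even readers who individually almost never spot the problem become near-certain to find it in aggregate, provided their errors really are independent.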

Without a huge research budget and dozens of managers, you won't be able to coordinate an army of researchers. But the good news is that you didn't really want to coordinate everyone anyway: you can just open the gates and let people get to work. It works fine for software!

Crowdsourced competitions

Another paradigm for crowdsourcing is to gamify science. That is not my jam; participants deserve a better stake in the work than having their dopamine systems twiddled, and the incentives of recruiting that way are not ideal: scientists' games will never be as fun as the highly evolved products of real game companies, and participants recruited through addictive mechanics are probably invested in a way that is suboptimal for our goals.

Tools

Software frameworks exist for collecting data from volunteers' devices. (Thanks to Dan Pagendam for the tip.)
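I won't endorse any particular framework here, but as a minimal sketch of the core thing they all wrap, a bare-bones observation-collection endpoint might look like the following. The route, field names, and validation rules are my invention, not any real framework's API.

```python
# A minimal sketch (hypothetical, not any particular framework) of the
# kind of endpoint a participatory-sensing app posts observations to.
from flask import Flask, jsonify, request

app = Flask(__name__)
observations = []  # in practice this would be a database

@app.route("/observe", methods=["POST"])
def observe():
    record = request.get_json(force=True)
    # Reject records that are not JSON objects or that are missing
    # the fields we (hypothetically) require from each observation.
    required = {"lat", "lon", "timestamp", "value"}
    if not isinstance(record, dict) or not required <= record.keys():
        return jsonify(error="missing fields"), 400
    observations.append(record)
    return jsonify(ok=True, n=len(observations)), 201

if __name__ == "__main__":
    app.run(port=5000)
```

A real framework earns its keep in everything this sketch omits: authentication, deduplication, participant consent, and quality control of the incoming records.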

