Draft

Adversarial classification

July 24, 2017 — April 22, 2023

classification
classification and society
collective knowledge
confidentiality
culture
ethics
game theory
how do science
incentive mechanisms
sociology
statistics
wonk

Content warning:

Discussion of hot-button contentious issues, such as gender identity and Israel-Palestine, upon which I conspicuously avoid taking a position while analysing the semantics of the public debate about them. Even that will risk being read as favouring a side. But since I am talking about the weaponisation of meaning, I see no option other than considering contentious issues where meaning is weaponised, which is rather the point.


1 Case study: a chair

Figure 3: A chair
Figure 4: A chair

2 When categories have value

Case study on gender

Figure 7: Chick the Cherub (Baum and Neill 1906), a non-binary children’s book character.

3 When categories are teams

Israel, Palestine

Likud, Hamas, Israelis, Palestinians, Islamophobia, Antisemitism, genocide.

Misunderstanding antisemitism in America

4 Arguing the boundaries of categories

TODO: likelihood principle, compressions, \(\mathfrak{M}\)-open…
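Pending that writeup: the \(\mathfrak{M}\)-open pointer refers to the standard Bayesian model-comparison taxonomy (usually credited to Bernardo and Smith); a quick gloss, because which box we are in decides whether category boundaries can be settled by likelihoods at all:

- \(\mathfrak{M}\)-closed: the true generating process \(p^{*}\) is in the candidate set \(\{p_{\theta} : \theta \in \Theta\}\), so likelihood comparison between candidates is well grounded.
- \(\mathfrak{M}\)-complete: \(p^{*}\) is conceivable but lies outside the candidate set; every model is an approximation, to be judged by its purpose.
- \(\mathfrak{M}\)-open: we cannot even write \(p^{*}\) down; choosing a model, or a category boundary, becomes a decision problem scored by utilities rather than a search for the one true partition.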

4.1 Decision theory

Motte and bailey: defend the narrow, defensible definition of a category when challenged, then argue from the expansive one. P-hack thyself: if you get to redraw the category boundary after seeing the data, you can manufacture whichever finding you want.
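To make the “p-hack thyself” point concrete, here is a minimal simulation sketch (a toy of my own, not anyone’s published method): the outcome is independent of the feature we categorise on, so every candidate boundary tests a null hypothesis that is true by construction, yet scanning boundaries and keeping the most flattering one inflates the false-positive rate well above the nominal level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def best_pvalue(x, n_boundaries=20):
    """Scan candidate category boundaries; keep the most 'significant' split."""
    best = 1.0
    for q in np.linspace(0.1, 0.9, n_boundaries):
        cut = np.quantile(x[:, 0], q)   # draw the boundary on the feature axis
        below, above = x[x[:, 0] <= cut, 1], x[x[:, 0] > cut, 1]
        best = min(best, stats.ttest_ind(below, above).pvalue)
    return best

# Column 0 is the feature we categorise on; column 1 is the outcome,
# independent of column 0 by construction, so every boundary tests a true null.
n_trials, false_positives = 1000, 0
for _ in range(n_trials):
    x = rng.normal(size=(200, 2))
    if best_pvalue(x) < 0.05:
        false_positives += 1

print(f"nominal level 0.05; realised rate ≈ {false_positives / n_trials:.2f}")
```

Fixing the boundary before seeing the data, or correcting for the number of boundaries scanned, removes the inflation; the trick only works when the category is drawn afterwards.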

5 Recommender systems and collective culture

See recommender dynamics.

6 Incoming

To read: Sen and Wasow (2016).

Jon Stokes, Google’s Colosseum

This map is contentious precisely because of its role in our red vs. blue power struggle, as a way of elevating some voices and silencing others. As such, it’s a remarkable example of the main point I’m trying to make in this post: the act of extracting a limited feature set from a natural paradigm, and then representing those higher-value features in a cultural product of some kind, is always about power on some level.

See also Affirming the Consequent and Tribal thermodynamics.


Henry Farrell and Marion Fourcade, The Moral Economy of High-Tech Modernism

While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes.
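The “self-adjusting allocation” framing invites a toy model. This sketch is my own illustration with made-up parameters, not Farrell and Fourcade’s: a greedy allocator whose feedback loop concentrates exposure on one item and freezes its estimates of the rest, even though the user’s true preferences are identical across items, so the “revealed” favourite is an artefact of the loop rather than an unmediated expression of the user’s wishes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 5
true_pref = np.full(n_items, 0.5)  # the user genuinely likes every item equally
clicks = np.ones(n_items)          # smoothed counts: a 1-click-per-2-shows prior
shows = np.full(n_items, 2.0)

for _ in range(10_000):
    item = np.argmax(clicks / shows)     # greedy: show the current "favourite"
    shows[item] += 1
    clicks[item] += rng.random() < true_pref[item]

print("estimated preference per item:", np.round(clicks / shows, 2))
print("exposure share per item:", np.round(shows / shows.sum(), 2))
```

Adding even a small exploration rate (e.g. \(\varepsilon\)-greedy) breaks the lock-in, which is the sense in which the skew is a property of the feedback loop, not of the user.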

7 References

Barocas, and Selbst. 2016. “Big Data’s Disparate Impact.” SSRN Scholarly Paper ID 2477899.
Baum, and Neill. 1906. John Dough and the Cherub.
Burrell. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society.
Che, Zhang, Sohl-Dickstein, et al. 2020. “Your GAN Is Secretly an Energy-Based Model and You Should Use Discriminator Driven Latent Sampling.” arXiv:2003.06060 [Cs, Stat].
Dean, and Morgenstern. 2022. “Preference Dynamics Under Personalized Recommendations.”
Dressel, and Farid. 2018. “The Accuracy, Fairness, and Limits of Predicting Recidivism.” Science Advances.
Dutta, Wei, Yueksel, et al. 2020. “Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing.” In Proceedings of the 37th International Conference on Machine Learning.
Dwork, Hardt, Pitassi, et al. 2012. “Fairness Through Awareness.” In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ITCS ’12.
Farrell, and Fourcade. 2023. “The Moral Economy of High-Tech Modernism.” Daedalus.
Feldman, Friedler, Moeller, et al. 2015. “Certifying and Removing Disparate Impact.” In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD ’15.
Gozli. 2023. “Principles of Categorization: A Synthesis.” Seeds of Science.
Ho, Kastner, and Wong. 1978. “Teams, Signaling, and Information Theory.” IEEE Transactions on Automatic Control.
Kleinberg, and Raghavan. 2021. “Algorithmic Monoculture and Social Welfare.” Proceedings of the National Academy of Sciences.
Laufer. 2020. “Compounding Injustice: History and Prediction in Carceral Decision-Making.” arXiv:2005.13404 [Cs, Stat].
Lee, and Skrentny. 2010. “Race Categorization and the Regulation of Business and Science.” Law & Society Review.
Leqi, Hadfield-Menell, and Lipton. 2021. “When Curation Becomes Creation: Algorithms, Microcontent, and the Vanishing Distinction Between Platforms and Creators.” Queue.
Menon, and Williamson. 2018. “The Cost of Fairness in Binary Classification.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency.
O’Neil. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
Pleiss, Raghavan, Wu, et al. 2017. “On Fairness and Calibration.” In Advances in Neural Information Processing Systems.
Raghavan. 2021. “The Societal Impacts of Algorithmic Decision-Making.”
Saperstein, Penner, and Light. 2013. “Racial Formation in Perspective: Connecting Individuals, Institutions, and Power Relations.” Annual Review of Sociology.
Sen, and Wasow. 2016. “Race as a Bundle of Sticks: Designs That Estimate Effects of Seemingly Immutable Characteristics.” Annual Review of Political Science.
Stray, Halevy, Assar, et al. 2022. “Building Human Values into Recommender Systems: An Interdisciplinary Synthesis.”
Venkatasubramanian, Scheidegger, Friedler, et al. 2021. “Fairness in Networks: Social Capital, Information Access, and Interventions.” In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. KDD ’21.
Verma, and Rubin. 2018. “Fairness Definitions Explained.” In Proceedings of the International Workshop on Software Fairness. FairWare ’18.
Wu, and Zhang. 2016. “Automated Inference on Criminality Using Face Images.” arXiv:1611.04135 [Cs].
Xu, and Dean. 2023. “Decision-Aid or Controller? Steering Human Decision Makers with Algorithms.”