Model fairness


One of history’s more notorious adventures in classifiers; Francois de Halleux at the Apartheid Museum

Which utilitarian ethical criteria does my model satisfy?

Consider the cautionary tale Automated Inference on Criminality using Face Images (Wu and Zhang 2016):

[…] we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle. Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds. The variation among criminal faces is significantly greater than that of the non-criminal faces. The two manifolds consisting of criminal and non-criminal faces appear to be concentric, with the non-criminal manifold lying in the kernel with a smaller span, exhibiting a law of normality for faces of non-criminals. In other words, the faces of general law-biding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people.

There are many problems with this. Which of them would you be happy for your local law enforcement authority to take home?

Maybe the in-progress textbook will have something to say: Fairness and Machine Learning, by Solon Barocas, Moritz Hardt, and Arvind Narayanan.

Or maybe I want to do a post hoc analysis of whether my model was in fact using fair criteria when it made a decision. That is model interpretation.
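As a crude illustration of such a post hoc check, here is a toy sketch (my own, not from any of the references here): flip the protected attribute in each record, re-score, and count how often the decision changes. The model, data, and column convention are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: column 0 is a binary protected attribute,
# the remaining columns are ordinary features.
X = rng.normal(size=(1000, 4))
X[:, 0] = rng.integers(0, 2, size=1000)
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Counterfactual flip test: toggle the protected attribute and re-score.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]
changed = model.predict(X) != model.predict(X_flipped)
print(f"Decisions that flip with the protected attribute: {changed.mean():.1%}")
```

Of course this only catches direct use of the attribute, not proxies for it, which is exactly the gap the causal framing below is meant to address.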

Think pieces on fairness in models in practice

Fairness and causal reasoning

Here’s a thing that was so simple and necessary that I assumed it had been done long before it was (Kilbertus et al. 2017):

Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively.

Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from “What is the right fairness criterion?” to “What do we want to assume about the causal data generating process?” Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them.

Fairness trade-offs

There are certain impossibility theorems around what you can do here. In a certain sense the only fair model is no model at all. Who should our automated model extend a loan to? Everyone! No one! All other decision rules impinge upon the impenetrable thicket of cause, effect and historical after-effects that characterises human moral calculus. No rule can satisfy every ethical notion of fairness. Practically, too, it is impossible to guarantee with absolute certainty that unfair criteria are not leaking into any given non-trivial model. However, that doesn’t mean you can’t fall short of the impossibility frontier on the side of unfairness (or indeed pointless inefficiency) if you don’t work at it.

Chris Tucchio, at Crunch Conf, makes some points about marginalist allocative/procedural fairness and net utility versus group rights:

If we choose to service Hyderabad with no disparities, we’ll run out of money and stop serving Hyderabad. The other NBFCs won’t.

Net result: Hyderabad is redlined by competitors and still gets no service.

Our choice: Keep the fraudsters out, utilitarianism over group rights.

He does a good job of explaining some impossibility theorems via examples, especially (Kleinberg, Mullainathan, and Raghavan 2016). Note the interesting intersection of two types of classifications implicit in his model — uniformly reject, versus biased accept/reject, subject to capital constraints. I need to revisit that and think some more.
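To make the Kleinberg-style tension concrete, here is a toy sketch (my own numbers, not from the talk or the paper): scores that are perfectly calibrated within each group by construction, thresholded into accept/reject, still cannot equalise error rates across groups whose base rates differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
threshold = 0.5

# Two hypothetical groups whose true risk distributions (and hence base
# rates) differ. Scores equal the true risk, so they are perfectly
# calibrated within each group by construction.
groups = {"A": (2.0, 2.0), "B": (2.0, 5.0)}  # Beta parameters

for name, (a, b) in groups.items():
    risk = rng.beta(a, b, size=n)          # calibrated score
    outcome = rng.binomial(1, risk)        # realised label
    accept = risk >= threshold
    fpr = accept[outcome == 0].mean()      # false positive rate
    fnr = (~accept)[outcome == 1].mean()   # false negative rate
    print(f"group {name}: base rate {outcome.mean():.2f}, "
          f"FPR {fpr:.2f}, FNR {fnr:.2f}")
```

The Beta parameters and threshold are arbitrary; the point is only that the gap in error rates appears whenever the base rates do not match.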

Han Zhao is an actual researcher in this area; see Inherent Tradeoffs in Learning Fair Representations, which includes two of his own results (Zhao et al. 2019; Zhao and Gordon 2019).

Han Zhao on statistical parity
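For reference, statistical parity (a.k.a. demographic parity) just requires that acceptance rates not depend on the protected attribute. A toy sketch of the usual gap measure, with hypothetical arrays:

```python
import numpy as np

def statistical_parity_gap(decisions, protected):
    """Absolute difference in acceptance rates between the two groups.

    decisions: 0/1 array of model decisions
    protected: 0/1 array marking group membership
    """
    decisions = np.asarray(decisions)
    protected = np.asarray(protected)
    return abs(decisions[protected == 1].mean()
               - decisions[protected == 0].mean())

# Hypothetical decisions: group 1 accepted 2/3 of the time, group 0 only 1/3.
print(statistical_parity_gap([1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 0]))  # ~0.33
```

If I am reading Zhao and Gordon (2019) right, the headline lower bound says that any classifier satisfying statistical parity must pay a combined group-wise error at least as large as the gap in base rates between the groups.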

Beauty contest problems in credit

🏗 Think about fairness problems that arise when the model is supposed to be rewarded for being a good bet about the future. Models that predict credit risk have a feedback/reinforcing dimension: people in a poverty trap are bad credit risks, even if they got into the poverty trap because of a lack of credit, and even though, were they not in a poverty trap, they might not be bad credit risks. Of course, people with a raging meth addiction who will spend every loan on drugs are also in the trap. A beauty contest problem is a model for this kind of situation, although there is a time dimension as well. There is presumably a game-theoretic equilibrium problem here. One imagines the Chinese restaurant process or something like it popping up, perhaps even the classic Pareto distribution or other Matthew-effect models.
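Here is a toy simulation of that feedback loop (entirely my own dynamics, nothing from the cited papers): a lender cuts off applicants below a score threshold, and being denied credit erodes next period’s score, so an initial disadvantage compounds.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rounds, cutoff = 10_000, 20, 0.0

# Initial "creditworthiness" scores; the second group starts slightly lower.
score = np.concatenate([rng.normal(0.2, 1.0, n), rng.normal(-0.2, 1.0, n)])
group = np.repeat([0, 1], n)

for _ in range(rounds):
    granted = score >= cutoff
    # Toy dynamics: getting credit improves your position a little;
    # denial erodes it a little. Noise keeps some mobility.
    score += np.where(granted, 0.05, -0.05) + rng.normal(0.0, 0.02, 2 * n)

for g in (0, 1):
    print(f"group {g}: approval rate {(score[group == g] >= cutoff).mean():.2f}")
```

After a few rounds the approval gap is essentially locked in by the initial conditions, which is the reinforcing dynamic the beauty-contest framing is pointing at.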

Aggarwal, Charu C., and Philip S. Yu. 2008. “A General Survey of Privacy-Preserving Data Mining Models and Algorithms.” In Privacy-Preserving Data Mining, edited by Charu C. Aggarwal and Philip S. Yu, 11–52. Advances in Database Systems 34. Springer US. https://doi.org/10.1007/978-0-387-70992-5_2.

Barocas, Solon, and Andrew D. Selbst. 2016. “Big Data’s Disparate Impact.” SSRN Scholarly Paper ID 2477899. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2477899.

Burrell, Jenna. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1): 2053951715622512. https://doi.org/10.1177/2053951715622512.

Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. “Fairness Through Awareness.” In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214–26. ITCS ’12. New York, NY, USA: ACM. https://doi.org/10.1145/2090236.2090255.

Feldman, Michael, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. “Certifying and Removing Disparate Impact.” In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 259–68. KDD ’15. New York, NY, USA: ACM. https://doi.org/10.1145/2783258.2783311.

Hardt, Moritz, Eric Price, and Nati Srebro. 2016. “Equality of Opportunity in Supervised Learning.” In Advances in Neural Information Processing Systems, 3315–23. http://papers.nips.cc/paper/6373-equality-of-opportunity-in-supervised-learning.

Hidalgo, César A., Diana Orghian, Jordi Albo Canals, Filipa de Almeida, and Natalia Martín Cantero. 2021. How Humans Judge Machines. Cambridge, Massachusetts: The MIT Press.

Kilbertus, Niki, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf. 2017. “Avoiding Discrimination Through Causal Reasoning.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 656–66. Curran Associates, Inc. http://papers.nips.cc/paper/6668-avoiding-discrimination-through-causal-reasoning.pdf.

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. 2016. “Inherent Trade-Offs in the Fair Determination of Risk Scores,” September. https://arxiv.org/abs/1609.05807v1.

Miconi, Thomas. 2017. “The Impossibility of "Fairness": A Generalized Impossibility Result for Decisions,” July. https://arxiv.org/abs/1707.01195.

O’Neil, Cathy. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Reprint edition. New York: Broadway Books.

Pleiss, Geoff, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. 2017. “On Fairness and Calibration.” In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1709.02012.

Sweeney, Latanya. 2013. “Discrimination in Online Ad Delivery.” Queue 11 (3): 10:10–10:29. https://doi.org/10.1145/2460276.2460278.

Wisdom, Scott, Thomas Powers, James Pitton, and Les Atlas. 2016. “Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery.” In Advances in Neural Information Processing Systems 29. http://arxiv.org/abs/1611.07252.

Wu, Xiaolin, and Xi Zhang. 2016. “Automated Inference on Criminality Using Face Images,” November. http://arxiv.org/abs/1611.04135.

Zemel, Rich, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. “Learning Fair Representations.” In Proceedings of the 30th International Conference on Machine Learning (ICML-13), 325–33. http://machinelearning.wustl.edu/mlpapers/papers/icml2013_zemel13.

Zhao, Han, Amanda Coston, Tameem Adel, and Geoffrey J. Gordon. 2019. “Conditional Learning of Fair Representations.” https://openreview.net/forum?id=Hkekl0NFPr.

Zhao, Han, and Geoffrey J. Gordon. 2019. “Inherent Tradeoffs in Learning Fair Representations,” October. http://arxiv.org/abs/1906.08386.