Model fairness

November 29, 2018 — April 21, 2022

adversarial
game theory
machine learning
wonk
Figure 1: One of history’s more notorious adventures in classifiers; Francois de Halleux at the Apartheid Museum

Which utilitarian ethical criteria does my model satisfy?

Consider the cautionary tale Automated Inference on Criminality using Face Images (Wu and Zhang 2016):

[…] we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle. Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds. The variation among criminal faces is significantly greater than that of the non-criminal faces. The two manifolds consisting of criminal and non-criminal faces appear to be concentric, with the non-criminal manifold lying in the kernel with a smaller span, exhibiting a law of normality for faces of non-criminals. In other words, the faces of the general law-abiding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people.

Which lessons would you be happy with your local law enforcement authority taking home from this?

Maybe the in-progress textbook by Solon Barocas, Moritz Hardt, and Arvind Narayanan, Fairness and Machine Learning, will have something to say?

Or maybe I want to do a post hoc analysis of whether my model was in fact using fair criteria when it made a decision. Model interpretation might help with that.

1 Think pieces on fairness in models in practice

2 Bias in data

  • Excavating AI: The Politics of Images in Machine Learning Training Sets, by Kate Crawford and Trevor Paglen

3 Fairness and causal reasoning

Here’s a thing so simple and necessary that I assumed it had been done long before it actually was (Kilbertus et al. 2017):

Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively.

Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from “What is the right fairness criterion?” to “What do we want to assume about the causal data generating process?” Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalising what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them.
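
To make the observational-versus-causal distinction concrete, here is a minimal sketch of my own (a toy linear structural causal model, not an example from the paper): the protected attribute A shifts a proxy feature X, the outcome Y does not depend on A at all, and yet a predictor trained on the proxy inherits a group gap that a predictor restricted to non-descendants of A does not. All variable names and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model (invented for illustration):
#   A ~ Bernoulli(0.5)     protected attribute
#   Z ~ Normal(0, 1)       cause of Y that is *not* a descendant of A
#   X = A + Z + noise      proxy feature: a descendant of A
#   Y = Z + noise          the outcome itself does not depend on A
A = rng.integers(0, 2, n)
Z = rng.normal(0.0, 1.0, n)
X = A + Z + rng.normal(0.0, 0.5, n)
Y = Z + rng.normal(0.0, 0.5, n)

# Least-squares slope of Y on a single regressor: a one-line "model fit".
def slope(feat):
    return np.cov(feat, Y)[0, 1] / np.var(feat)

pred_proxy = slope(X) * X  # "unaware" predictor: ignores A but uses its descendant X
pred_clean = slope(Z) * Z  # predictor restricted to non-descendants of A

for name, pred in [("uses proxy X", pred_proxy), ("uses only Z ", pred_clean)]:
    gap = pred[A == 1].mean() - pred[A == 0].mean()
    print(f"{name}: mean prediction gap between groups = {gap:+.3f}")

# The proxy-based predictor shows a group gap even though Y is independent of A.
# The joint distribution alone cannot tell us whether that gap is objectionable;
# we needed the causal story to see where it comes from, which is roughly the
# point the paper formalises.
```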

4 Fairness-accuracy trade-offs

There are certain impossibility theorems about what we can do here. Suppose we have a perfectly unbiased dataset and an efficient algorithm that exploits it for the best possible accuracy (both extremely non-trivial to obtain, but let us assume them). How accurate can we then be if we constrain the model to use only fair solutions (for some value of fairness), even though that means being blind to features that are informative about the question? Fairness-accuracy trade-offs quantify the “cost” of fairness in terms of forgone accuracy, across the various possible degrees of compromise. There are lots of very beautiful results in this area (Menon and Williamson 2018; Wang et al. 2021).
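
As a crude empirical illustration of paying for fairness with accuracy, here is a synthetic sketch of my own. All numbers are invented, and the parity adjustment is a naive per-group threshold shift rather than any of the principled methods in the cited papers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Synthetic data: group membership shifts the feature distribution, so an
# accuracy-maximising classifier accepts the two groups at different rates.
g = rng.integers(0, 2, n)              # protected group label
x = 0.5 * g + rng.normal(0.0, 1.0, n)  # a single informative feature
y = (x + rng.normal(0.0, 1.0, n) > 0.75).astype(int)

clf = LogisticRegression().fit(x.reshape(-1, 1), y)
scores = clf.predict_proba(x.reshape(-1, 1))[:, 1]

# Unconstrained rule: one global threshold.
pred_unc = (scores >= 0.5).astype(int)

# Naive demographic-parity post-processing: per-group thresholds chosen so that
# both groups are accepted at the same (pooled) rate.
target_rate = pred_unc.mean()
pred_dp = np.zeros(n, dtype=int)
for grp in (0, 1):
    m = g == grp
    pred_dp[m] = (scores[m] >= np.quantile(scores[m], 1 - target_rate)).astype(int)

def report(name, pred):
    acc = (pred == y).mean()
    rates = [pred[g == grp].mean() for grp in (0, 1)]
    print(f"{name}: accuracy {acc:.3f}, acceptance rates {rates[0]:.3f} / {rates[1]:.3f}")

report("unconstrained  ", pred_unc)
report("parity-adjusted", pred_dp)
# Equalising acceptance rates costs some accuracy, because the adjusted rule is
# deliberately blind to part of what the score knows about the outcome.
```

For principled versions of this kind of post-processing see Hardt, Price, and Srebro (2016); for the general theory of the trade-off, Menon and Williamson (2018).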

In a certain sense, the only fair model is no model at all. Who should our automated model extend a loan to? Everyone! No one! All other decision rules impinge upon the impenetrable thicket of cause and effect and historical after-effects that characterise human moral calculus.

Chris Stucchio, at Crunch Conf, makes some points about marginalist allocative/procedural fairness and net utility versus group rights:

If we choose to service Hyderabad with no disparities, we’ll run out of money and stop serving Hyderabad. The other NBFCs won’t.

Net result: Hyderabad is redlined by competitors and still gets no service.

Our choice: Keep the fraudsters out, utilitarianism over group rights.

He does a good job of explaining some impossibility theorems via examples, especially Kleinberg, Mullainathan, and Raghavan (2016). Note the interesting intersection of two types of classification implicit in his model (uniformly reject, versus biased accept/reject, subject to capital constraints); I need to revisit that and think some more.
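
The flavour of the Kleinberg, Mullainathan, and Raghavan (2016) result can be seen in a back-of-the-envelope calculation (mine, not from the talk): if a score is calibrated within each group and the groups have different base rates, then thresholding the score generally produces different error rates across the groups. The numbers below are invented.

```python
# Two groups sharing the same two calibrated score bins (0.2 and 0.6) but with
# different base rates of the outcome. Calibration means that among people with
# score s, a fraction s actually have the outcome.
groups = {"group A": 0.40, "group B": 0.30}  # base rates (invented)
lo, hi = 0.2, 0.6                            # the two score bins

for name, p in groups.items():
    # Weight on the high bin such that the average score equals the base rate,
    # as calibration requires: w*hi + (1-w)*lo = p.
    w = (p - lo) / (hi - lo)
    # Decision rule: accept exactly the people in the high-score bin.
    fpr = w * (1 - hi) / (w * (1 - hi) + (1 - w) * (1 - lo))  # among true negatives
    fnr = (1 - w) * lo / (w * hi + (1 - w) * lo)              # among true positives
    print(f"{name}: base rate {p:.2f}, FPR {fpr:.3f}, FNR {fnr:.3f}")

# Same calibrated score, same decision rule, yet the false positive and false
# negative rates differ between the groups purely because the base rates differ.
```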

Han Zhao is an actual researcher in this area; see Inherent Tradeoffs in Learning Fair Representations, which covers two of his own results (Zhao et al. 2019; Zhao and Gordon 2019).

Figure 2: Han Zhao on statistical parity

In practice, argues Hutter (2019), the beauty of these theorems can hide the messiness of reality, where the definition of fairness and even the accuracy objective are both underspecified. That leaves the door open to choosing the free parameters of the fairness constraint and of the model objective jointly, so that the apparent discrepancy between them shrinks.

5 Fairness criteria

And in fact, what even is fairness? It turns out that there are lots of difficulties in codifying it.

Hedden (2021) has recently argued that many proposed criteria are incoherent. Loi et al. (2021) attempt to salvage fairness by distinguishing group from individual fairness.
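
For concreteness, here is a small sketch of my own (the definitions follow Verma and Rubin (2018); the function name is made up) computing the per-group quantities behind three standard observational criteria. Even on a toy example they measure quite different things and can disagree about which group is disadvantaged.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group quantities behind three common observational fairness criteria."""
    out = {}
    for grp in np.unique(group):
        m = group == grp
        yt, yp = y_true[m], y_pred[m]
        out[int(grp)] = {
            "acceptance": yp.mean(),    # demographic parity compares P(Yhat=1)
            "tpr": yp[yt == 1].mean(),  # equalised odds compares TPR and FPR
            "fpr": yp[yt == 0].mean(),
            "ppv": yt[yp == 1].mean(),  # predictive parity compares P(Y=1 | Yhat=1)
        }
    return out

# A tiny invented example: ten people in each group.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0,   1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0,   1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
group  = np.array([0] * 10 + [1] * 10)

for grp, rates in group_rates(y_true, y_pred, group).items():
    print(grp, {k: round(float(v), 2) for k, v in rates.items()})
# Group 1 looks disadvantaged through the demographic-parity lens (lower
# acceptance rate) while group 0 looks disadvantaged through the
# equal-opportunity lens (lower true positive rate): the criteria disagree.
```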

6 Beauty contest problems and mythic fairness

🏗 Think about fairness problems that arise when the model is rewarded for being a good bet about the future, which is to say, when it is choosing people for participation in a self-fulfilling prophecy. Models that predict credit risk have a feedback/reinforcing dimension: people in a poverty trap are bad credit risks, even if they got into the poverty trap through lack of credit, and even though they might not be bad credit risks if they were not in the trap. (Of course, people with a raging meth addiction who will spend the loan on drugs are also in that trap.) A beauty contest problem is one model for this kind of situation, although there is a time dimension as well, and presumably a game-theoretic equilibrium problem lurking in it. One imagines the Chinese restaurant process or something like it popping up, perhaps even the classic Pareto distribution or other Matthew-effect models.
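
A hedged toy simulation of that feedback dynamic (entirely my own invention, not taken from any of the cited papers): latent credit-worthiness improves when a loan is granted and erodes when it is denied, while the lender simply thresholds on current worthiness.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_rounds, threshold = 5_000, 20, 0.0

# Invented toy dynamic: 'worth' is a latent credit-worthiness score, and
# group 1 starts slightly behind (say, due to a historical poverty trap).
group = rng.integers(0, 2, n)
worth = -0.3 * group + rng.normal(0.0, 1.0, n)
initial_gap = worth[group == 0].mean() - worth[group == 1].mean()

for _ in range(n_rounds):
    granted = worth > threshold  # lender thresholds on current worthiness
    # Self-fulfilling prophecy: credit lets people improve, denial erodes them.
    worth += np.where(granted, +0.05, -0.05) + rng.normal(0.0, 0.02, n)

final_gap = worth[group == 0].mean() - worth[group == 1].mean()
access = [(worth[group == grp] > threshold).mean() for grp in (0, 1)]
print(f"mean worth gap: {initial_gap:.2f} initially, {final_gap:.2f} after {n_rounds} rounds")
print(f"share with access to credit: group 0 = {access[0]:.2f}, group 1 = {access[1]:.2f}")
# Because denial pushes people further below the cutoff, the initial gap
# compounds over time: a crude rich-get-richer / Matthew-effect dynamic.
```

Compare Laufer (2020b) on feedback effects in repeat-use risk assessments.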

7 Matthew effects

Related to, but I think distinct from, beauty-contest problems: algorithmic decisions as part of a larger feedback loop. From the abstract of Venkatasubramanian et al. (2021):

As ML systems have become more broadly adopted in high-stakes settings, our scrutiny of them should reflect their greater impact on real lives. The field of fairness in data mining and machine learning has blossomed in the last decade, but most of the attention has been directed at tabular and image data. In this tutorial, we will discuss recent advances in network fairness. Specifically, we focus on problems where one’s position in a network holds predictive value (e.g., in a classification or regression setting) and favorable network position can lead to a cascading loop of positive outcomes, leading to increased inequality. We start by reviewing important sociological notions such as social capital, information access, and influence, as well as the now-standard definitions of fairness in ML settings. We will discuss the formalizations of these concepts in the network fairness setting, presenting recent work in the field, and future directions.

8 Compliance

  • Parity.ai looks interesting for demonstrating that processes satisfy certain types of fairness.

9 References

Aggarwal, and Yu. 2008. “A General Survey of Privacy-Preserving Data Mining Models and Algorithms.” In Privacy-Preserving Data Mining. Advances in Database Systems 34.
Barocas, and Selbst. 2016. “Big Data’s Disparate Impact.” SSRN Scholarly Paper ID 2477899.
Berk. 2021. “Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement.” Annual Review of Criminology.
Berk, Kuchibhotla, and Tchetgen Tchetgen. 2023. “Fair Risk Algorithms.” Annual Review of Statistics and Its Application.
Black, Koepke, Kim, et al. 2023. “Less Discriminatory Algorithms.” SSRN Scholarly Paper.
Burrell. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society.
Cooper, and Abrams. 2021. “Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research.” In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
Dressel, and Farid. 2018. “The Accuracy, Fairness, and Limits of Predicting Recidivism.” Science Advances.
Dutta, Wei, Yueksel, et al. 2020. “Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing.” In Proceedings of the 37th International Conference on Machine Learning.
Dwork, Hardt, Pitassi, et al. 2012. “Fairness Through Awareness.” In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ITCS ’12.
Feldman, Friedler, Moeller, et al. 2015. “Certifying and Removing Disparate Impact.” In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD ’15.
Ghorbani, and Zou. 2019. “Data Shapley: Equitable Valuation of Data for Machine Learning.”
Gopalan, Hu, Kim, et al. 2022. “Loss Minimization Through the Lens of Outcome Indistinguishability.”
Hama, Mase, and Owen. 2022. “Model Free Shapley Values for High Dimensional Data.”
Hardt, Price, and Srebro. 2016. “Equality of Opportunity in Supervised Learning.” In Advances in Neural Information Processing Systems.
Hardt, and Recht. 2021. “Patterns, Predictions, and Actions: A Story about Machine Learning.” arXiv:2102.05242 [Cs, Stat].
Hedden. 2021. “On Statistical Criteria of Algorithmic Fairness.” Philosophy & Public Affairs.
Hidalgo, Orghian, Albo Canals, et al. 2021. How Humans Judge Machines.
Hutter. 2019. “Fairness Without Regret.” arXiv:1907.05159 [Cs, Stat].
Karimi, Barthe, Schölkopf, et al. 2021. “A Survey of Algorithmic Recourse: Definitions, Formulations, Solutions, and Prospects.”
Kilbertus, Rojas Carulla, Parascandolo, et al. 2017. “Avoiding Discrimination Through Causal Reasoning.” In Advances in Neural Information Processing Systems 30.
Kleinberg, Mullainathan, and Raghavan. 2016. “Inherent Trade-Offs in the Fair Determination of Risk Scores.”
Laufer. 2020a. “Compounding Injustice: History and Prediction in Carceral Decision-Making.” arXiv:2005.13404 [Cs, Stat].
———. 2020b. “Feedback Effects in Repeat-Use Criminal Risk Assessments.” arXiv:2011.14075 [Cs, Stat].
Liu, and Vicente. 2020. “Accuracy and Fairness Trade-Offs in Machine Learning: A Stochastic Multi-Objective Approach.”
Loi, Viganò, Hertweck, et al. 2021. “People Are Not Coins: A Reply to Hedden.” SSRN Scholarly Paper 3857889.
Lundberg, Scott M., Erion, Chen, et al. 2020. “From Local Explanations to Global Understanding with Explainable AI for Trees.” Nature Machine Intelligence.
Lundberg, Scott M., and Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Advances in Neural Information Processing Systems.
Menon, and Williamson. 2018. “The Cost of Fairness in Binary Classification.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency.
Miconi. 2017. “The Impossibility of ‘Fairness’: A Generalized Impossibility Result for Decisions.”
Mishler, and Kennedy. 2021. “FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes.” arXiv:2109.00173 [Cs, Stat].
Mitchell, Potash, Barocas, et al. 2021. “Algorithmic Fairness: Choices, Assumptions, and Definitions.” Annual Review of Statistics and Its Application.
O’Neil. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
Parkes, Vohra, and participants. 2019. “Algorithmic and Economic Perspectives on Fairness.” arXiv:1909.05282 [Cs].
Pleiss, Raghavan, Wu, et al. 2017. “On Fairness and Calibration.” In Advances in Neural Information Processing Systems.
Raghavan. 2021. “The Societal Impacts of Algorithmic Decision-Making.”
Sweeney. 2013. “Discrimination in Online Ad Delivery.” Queue.
Venkatasubramanian, Scheidegger, Friedler, et al. 2021. “Fairness in Networks: Social Capital, Information Access, and Interventions.” In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. KDD ’21.
Verma, and Rubin. 2018. “Fairness Definitions Explained.” In Proceedings of the International Workshop on Software Fairness. FairWare ’18.
Wang, Wang, Beutel, et al. 2021. “Understanding and Improving Fairness-Accuracy Trade-Offs in Multi-Task Learning.” In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining.
Wisdom, Powers, Pitton, et al. 2016. “Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery.” In Advances in Neural Information Processing Systems 29.
Wu, and Zhang. 2016. “Automated Inference on Criminality Using Face Images.” arXiv:1611.04135 [Cs].
Zemel, Wu, Swersky, et al. 2013. “Learning Fair Representations.” In Proceedings of the 30th International Conference on Machine Learning (ICML-13).
Zhao, Coston, Adel, et al. 2019. “Conditional Learning of Fair Representations.”
Zhao, and Gordon. 2019. “Inherent Tradeoffs in Learning Fair Representations.” arXiv:1906.08386 [Cs, Stat].