# Equality of Opportunity in Supervised Learning

    @inproceedings{Hardt2016EqualityOO,
      title     = {Equality of Opportunity in Supervised Learning},
      author    = {Moritz Hardt and Eric Price and Nathan Srebro},
      booktitle = {NIPS},
      year      = {2016}
    }

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. [...] Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. We encourage readers to consult the more complete manuscript on the arXiv.
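
The paper's central criterion, equality of opportunity, asks that qualified individuals (Y = 1) have the same chance of a positive prediction in every group. A minimal sketch of how the criterion can be audited empirically (the function name is my own, not from the paper's code):

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference across groups in the true positive rate
    P(Yhat = 1 | Y = 1, A = a); zero means equality of opportunity holds."""
    tprs = [y_pred[(group == a) & (y_true == 1)].mean()
            for a in np.unique(group)]
    return max(tprs) - min(tprs)

# Toy audit: group 0 has TPR 3/4, group 1 has TPR 1/2
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 0, 1])
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.25
```

Equalized odds, the stricter criterion from the same paper, additionally requires equal false positive rates, i.e. the analogous gap computed over the Y = 0 examples must also vanish.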

#### Supplemental Code

GitHub repo (via Papers with Code): a Python package to assess and improve fairness of machine learning models.


#### 1,790 Citations

Fairness in Supervised Learning: An Information Theoretic Approach

- Computer Science, Mathematics
- 2018 IEEE International Symposium on Information Theory (ISIT)
- 2018

This work presents an information theoretic framework for designing fair predictors from data, which aim to prevent discrimination against a specified sensitive attribute in a supervised learning setting and uses equalized odds as the criterion for discrimination.

Taking Advantage of Multitask Learning for Fair Classification

- Computer Science, Mathematics
- AIES
- 2019

This paper proposes to use Multitask Learning (MTL), enhanced with fairness constraints, to jointly learn group-specific classifiers that leverage information between sensitive groups, and proposes a three-pronged approach to tackle fairness: increasing accuracy on each group, enforcing measures of fairness during training, and protecting sensitive information during testing.

A Distributionally Robust Approach to Fair Classification

- Computer Science, Mathematics
- ArXiv
- 2020

A distributionally robust logistic regression model is proposed with an unfairness penalty that prevents discrimination with respect to sensitive attributes such as gender or ethnicity; the resulting classifier is demonstrated to improve fairness at a marginal loss of predictive accuracy on both synthetic and real datasets.

A Framework for Benchmarking Discrimination-Aware Models in Machine Learning

- Computer Science
- AIES
- 2019

Experimental results show that the quality of techniques can be assessed through known metrics of discrimination, and the flexible framework can be extended to most real datasets and fairness measures to support a diversity of assessments.

On preserving non-discrimination when combining expert advice

- Computer Science, Mathematics
- NeurIPS
- 2018

We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions. We consider…

Fair Selective Classification Via Sufficiency

- Computer Science
- ICML
- 2021

It is proved that the sufficiency criterion can be used to mitigate disparities between groups by ensuring that selective classification increases performance on all groups, and a method is introduced, based on this criterion, for mitigating the disparity in precision across the entire coverage scale.

Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification

- Computer Science, Mathematics
- NeurIPS
- 2019

It is shown that the fair optimal classifier is obtained by recalibrating the Bayes classifier by a group-dependent threshold and the overall procedure is shown to be statistically consistent both in terms of the classification error and fairness measure.
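
The recalibration result above says the fair optimal classifier thresholds a score with a different cutoff per group. A hedged sketch of such group-dependent thresholding, choosing each group's cutoff so that it attains a target true positive rate (the quantile rule here is my illustration, not the paper's estimator):

```python
import numpy as np

def group_thresholds(scores, y_true, group, target_tpr=0.75):
    """Per-group cutoff: the (1 - target_tpr) quantile of the scores of
    true positives, so each group reaches at least target_tpr."""
    cuts = {}
    for a in np.unique(group):
        pos = np.sort(scores[(group == a) & (y_true == 1)])
        k = int(np.floor((1 - target_tpr) * len(pos)))
        cuts[a] = pos[k]
    return cuts

def predict_with_group_thresholds(scores, group, cuts):
    """Classify each example against its own group's cutoff."""
    return np.array([int(s >= cuts[g]) for s, g in zip(scores, group)])

scores = np.array([0.2, 0.5, 0.6, 0.9, 0.3, 0.7])
y_true = np.array([1, 1, 1, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1])
cuts = group_thresholds(scores, y_true, group)  # cutoff 0.5 for group 0, 0.3 for group 1
```

In practice the scores would come from a learned estimate of P(Y = 1 | X), as in the paper's use of labeled and unlabeled data.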

Eliminating Latent Discrimination: Train Then Mask

- Computer Science, Mathematics
- SSRN Electronic Journal
- 2019

This paper defines a new operational fairness criterion, inspired by the well-understood notion of omitted-variable bias in statistics and econometrics, and establishes analytical and algorithmic results about the existence of a fair classifier in the context of supervised learning.

Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination

- Computer Science, Mathematics
- ArXiv
- 2020

A framework of fair semi-supervised learning in the pre-processing phase is presented, including pseudo-labeling to predict labels for unlabeled data, a re-sampling method to obtain multiple fair datasets, and lastly ensemble learning to improve accuracy and decrease discrimination.

Wasserstein Fair Classification

- Computer Science, Mathematics
- UAI
- 2019

An approach to fair classification is proposed that enforces independence between the classifier outputs and sensitive information by minimizing Wasserstein-1 distances and is robust to specific choices of the threshold used to obtain class predictions from model outputs.
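
The Wasserstein approach penalizes the W1 distance between the model's output distributions in the two groups; for equal-size empirical samples this reduces to the mean absolute difference of sorted values. A sketch under that equal-size assumption (not the paper's full estimator, and function names are my own):

```python
import numpy as np

def w1_equal_samples(u, v):
    """W1 distance between two equal-size empirical samples:
    mean absolute difference of the sorted values."""
    assert len(u) == len(v)
    return float(np.mean(np.abs(np.sort(u) - np.sort(v))))

def output_parity_penalty(scores, group):
    """W1 distance between the per-group score distributions; zero means
    the classifier outputs are distributed identically in both groups."""
    a, b = np.unique(group)
    return w1_equal_samples(scores[group == a], scores[group == b])

scores = np.array([0.1, 0.4, 0.6, 0.1, 0.4, 0.6])
group  = np.array([0, 0, 0, 1, 1, 1])
print(output_parity_penalty(scores, group))  # identical distributions -> 0.0
```

Because the penalty is computed on the continuous scores rather than on thresholded labels, it is insensitive to the particular classification threshold, which is the robustness property the abstract highlights.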

#### References

Showing 1–10 of 28 references

Learning Fair Representations

- Mathematics, Computer Science
- ICML
- 2013

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the…

On the relation between accuracy and fairness in binary classification

- Computer Science
- ArXiv
- 2015

It is argued that comparison of non-discriminatory classifiers needs to account for different rates of positive predictions, otherwise conclusions about performance may be misleading, because accuracy and discrimination of naive baselines on the same dataset vary with different rates of positive predictions.

Discrimination-aware data mining

- Computer Science
- KDD
- 2008

This approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.

Learning Fair Classifiers

- Computer Science, Mathematics
- 2015

This paper introduces a flexible mechanism to design fair classifiers in a principled manner and instantiates this mechanism on three well-known classifiers: logistic regression, hinge loss, and linear and nonlinear support vector machines.

Building Classifiers with Independency Constraints

- Computer Science
- 2009 IEEE International Conference on Data Mining Workshops
- 2009

This paper studies the classification-with-independency-constraints problem: find an accurate model whose predictions are independent of a given binary attribute; two solutions are proposed along with an empirical validation.

Inherent Trade-Offs in the Fair Determination of Risk Scores

- Computer Science, Mathematics
- ITCS
- 2017

Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.

The Variational Fair Autoencoder

- Mathematics, Computer Science
- ICLR
- 2016

This model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation that is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.

Certifying and Removing Disparate Impact

- Computer Science, Mathematics
- KDD
- 2015

This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
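
The certification idea can be sketched directly: train a predictor of the protected attribute from the remaining features, and if it beats chance by a wide margin, the features encode the protected class, so even a "blind" model can produce disparate impact. A sketch using scikit-learn (the 0.6 accuracy threshold is an arbitrary illustration, not the paper's statistic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def predictability_audit(X, protected, threshold=0.6):
    """Cross-validated accuracy of predicting the protected attribute from
    the other features; high accuracy flags potential disparate impact
    even for models that never see the attribute directly."""
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X, protected, cv=3).mean()
    return acc, bool(acc > threshold)

# Synthetic features that leak the protected attribute almost perfectly
rng = np.random.default_rng(0)
protected = np.array([0] * 30 + [1] * 30)
X = protected.reshape(-1, 1) + rng.normal(0.0, 0.1, size=(60, 1))
acc, flagged = predictability_audit(X, protected)
```

Because the audit only needs the features and the protected attribute, not the downstream model, it matches the paper's framing of certifying a dataset rather than a specific classifier.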

Fairness through awareness

- Mathematics, Computer Science
- ITCS '12
- 2012

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.

Big Data's Disparate Impact

- Sociology
- 2016

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with…