There are two doctrines in anti-discrimination law: disparate treatment and disparate impact. Let's take a minute to look at each of them. Disparate treatment refers to intentionally discriminatory conduct, such as treating people differently because of a protected attribute. Disparate impact refers to a procedure that is neutral on its face but still disproportionately harms a protected group. In a disparate impact case, the plaintiff first has to show that the procedure produces a significant disparity between groups; the defendant can then respond by demonstrating that the procedure serves a legitimate business purpose. After this is done, the plaintiff has the opportunity to show that the goal of the procedure could also be achieved with a different procedure that shows a smaller disparity.
Note: For a more in-depth overview of these topics, see Moritz Hardt's 2017 NeurIPS presentation at http://mrtz.org/nips17/#/11.
The disparate treatment doctrine aims at procedural fairness and equal opportunity; the disparate impact doctrine aims at distributive justice and minimal inequality of outcomes.
There is an intrinsic tension between the two doctrines, as illustrated by the 2009 case Ricci v. DeStefano. In this case, 19 white firefighters and 1 Hispanic firefighter sued their employer, the New Haven Fire Department. The firefighters had all passed their promotion test, yet none of their black colleagues scored high enough to qualify for promotion. Fearing a disparate impact lawsuit, the city invalidated the test results and did not promote the firefighters. Because the evidence for disparate impact was not strong enough, the Supreme Court of the United States eventually ruled that the firefighters should have been promoted.
Given the complex legal and technical situation around fairness in machine learning, we're going to dive into how we can define and quantify fairness, before using this insight to create fairer models.
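As a first taste of quantifying disparity, here is a minimal sketch that computes the ratio of selection rates between two groups. The group data is made up for illustration (it is not from the Ricci case), and the 0.8 threshold is the EEOC's "four-fifths" rule of thumb, a common heuristic for flagging potential disparate impact rather than a legal bright line:

```python
def selection_rate(decisions):
    """Fraction of positive decisions (e.g., promotions) in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_a, decisions_b):
    """Ratio of the lower group selection rate to the higher one.

    A ratio below 0.8 is often flagged under the EEOC's
    "four-fifths" rule of thumb.
    """
    low, high = sorted([selection_rate(decisions_a),
                        selection_rate(decisions_b)])
    return low / high

# Hypothetical binary decisions, 1 = positive outcome
group_a = [1, 1, 1, 0, 1]  # selection rate 0.8
group_b = [1, 0, 0, 0, 1]  # selection rate 0.4

ratio = disparate_impact_ratio(group_a, group_b)
print(ratio)  # 0.5, which falls below the 0.8 threshold
```

A ratio like this is only a starting point: it captures outcome disparity but says nothing about why the disparity arises, which is exactly what the more careful definitions in the next sections address.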