Book Description

Are human decisions less biased than automated ones? AI is increasingly showing up in highly sensitive areas such as healthcare, hiring, and criminal justice. Many people assume that using data to automate decisions would make everything fair, but that’s not the case. In this report, business, analytics, and data science leaders will examine the challenges of defining fairness and reducing unfair bias throughout the machine learning pipeline.

Trisha Mahoney, Kush R. Varshney, and Michael Hind from IBM explain why you need to engage early and authoritatively when building AI you can trust. You’ll learn how your organization should approach fairness and bias, including trade-offs you need to make between model accuracy and model bias. This report also introduces you to AI Fairness 360, an extensible open source toolkit for measuring, understanding, and reducing AI bias.
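For a flavor of what the toolkit looks like in practice, here is a minimal sketch of the AIF360 workflow the report's tutorial walks through: compute a fairness metric on a dataset, apply a pre-processing mitigation algorithm, and recompute the metric. The specifics (the UCI Adult census dataset, “sex” as the protected attribute, and the Reweighing algorithm) are illustrative choices rather than the report's exact code, and the example assumes AIF360 is installed and the Adult data files are available locally.

    # Minimal AIF360 sketch: measure bias, mitigate it, measure again.
    # Assumes `pip install aif360` and that the UCI Adult census files
    # (adult.data, adult.test, adult.names) are in aif360's data directory.
    from aif360.datasets import AdultDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Load the dataset with 'sex' as the protected attribute and split it.
    dataset = AdultDataset(protected_attribute_names=['sex'],
                           privileged_classes=[['Male']])
    train, test = dataset.split([0.7], shuffle=True)

    privileged = [{'sex': 1}]
    unprivileged = [{'sex': 0}]

    # Group fairness metric on the original training data.
    metric = BinaryLabelDatasetMetric(train,
                                      unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print('Mean difference before mitigation:', metric.mean_difference())

    # Mitigate bias with the Reweighing pre-processing algorithm, then re-measure.
    rw = Reweighing(unprivileged_groups=unprivileged,
                    privileged_groups=privileged)
    train_transf = rw.fit_transform(train)
    metric_transf = BinaryLabelDatasetMetric(train_transf,
                                             unprivileged_groups=unprivileged,
                                             privileged_groups=privileged)
    print('Mean difference after mitigation:', metric_transf.mean_difference())

A mean difference near zero after reweighing indicates that favorable outcomes are distributed more evenly between the privileged and unprivileged groups in the training data.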

In this report, you’ll explore:

  • Legal, ethical, and trust factors you need to consider when defining fairness for your use case
  • Different ways to measure and remove unfair bias, using the most relevant metrics for the particular use case
  • How to define acceptable thresholds for model accuracy and unfair model bias

Table of Contents

  Introduction
    1. Are Human Decisions Less Biased Than Automated Ones?
    2. AI Fairness Is Becoming Increasingly Critical
    3. Defining Fairness
    4. Where Does Bias Come From?
    5. Bias and Machine Learning
    6. Can’t I Just Remove Protected Attributes?
    7. Conclusion
  1. Understanding and Measuring Bias with AIF360
    1. Tools and Terminology
      1. Terminology
    2. Which Metrics Should You Use?
      1. Individual Versus Group Fairness Metrics
      2. Worldviews and Metrics
      3. Dataset Class
    3. Transparency in Bias Metrics
      1. Explainer Class
      2. AI FactSheets
  2. Algorithms for Bias Mitigation
    1. Most Bias Starts with Your Data
    2. Pre-Processing Algorithms
    3. In-Processing Algorithms
    4. Post-Processing Algorithms
    5. Continuous Pipeline Measurement
  3. Python Tutorial
    1. Step 1: Import Statements
    2. Step 2: Load Dataset, Specify Protected Attribute, and Split Dataset into Train and Test
    3. Step 3: Compute Fairness Metric on Original Training Dataset
    4. Step 4: Mitigate Bias by Transforming the Original Dataset
    5. Step 5: Compute Fairness Metric on Transformed Dataset
  4. Conclusion
    1. The Future of Fairness in AI