A Survey on Bias and Fairness in Machine Learning

By Ninareh Mehrabi et al.
Published on Jan. 25, 2022

Table of Contents

1. Introduction
2. Real-World Examples of Algorithmic Unfairness
2.1 Systems that Demonstrate Discrimination
2.2 Assessment Tools
3. Bias in Data, Algorithms, and User Experiences
3.1 Types of Bias

Summary

The document discusses the importance of fairness in machine learning systems, given their widespread use across applications. It highlights the biases that can arise in AI systems and the need to address them to avoid discriminatory outcomes. The authors identify sources of bias in data and algorithms and describe how biased outcomes affect user experiences. Examples of biased systems and tools for assessing fairness are presented, and different types of bias are categorized, including Measurement Bias, Omitted Variable Bias, Representation Bias, Aggregation Bias, Simpson's Paradox, the Modifiable Areal Unit Problem, Sampling Bias, and the Longitudinal Data Fallacy. The document emphasizes that researchers and engineers should consider fairness constraints when designing AI systems.
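To make one of the listed bias types concrete, here is a minimal sketch of Simpson's Paradox using synthetic data (the data and helper function are illustrative, not from the survey): each subgroup shows a clear positive trend, yet pooling the subgroups reverses the sign of the trend, which is why aggregation choices can themselves introduce bias.

```python
# Illustrative sketch of Simpson's Paradox with synthetic data:
# the trend within each subgroup reverses when the data are pooled.

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Two subgroups, each with a clear POSITIVE trend (slope = +1).
xa, ya = [0, 1, 2, 3, 4], [10, 11, 12, 13, 14]
xb, yb = [10, 11, 12, 13, 14], [0, 1, 2, 3, 4]

print(slope(xa, ya))            # +1.0 within group A
print(slope(xb, yb))            # +1.0 within group B
print(slope(xa + xb, ya + yb))  # negative once the groups are pooled
```

An analysis that ignored the subgroup variable would conclude the opposite relationship from one that conditioned on it, which is the core of the survey's warning about aggregation-related biases.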