Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making

By L. Smith et al.
Published on Dec. 10, 2017

Table of Contents

Overview
The Chart of Potential Mitigation Sets
The Chart of Potential Harms from Automated Decision-Making
Filter Bubbles
Stereotype Reinforcement
Individual Harms
Collective/Societal Harms
Economic Loss
Social Detriment
Loss of Liberty
Working Definitions: Harms
Working Definitions: Mitigation
Reviewed Literature

Summary

Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, it also raises concerns about differential treatment of individuals and harmful impacts on vulnerable communities, particularly when sensitive data feed into automated decision-making. Legal and ethical debates over the use of sensitive data are ongoing. This document categorizes the harms that can arise from automated decision-making, along with strategies for mitigating them, with the aim of empowering solutions that reduce the risks of algorithmic discrimination.