Speciesist Bias in AI

By Thilo Hagendorff et al.

Table of Contents

1. Abstract - Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair.
2. Introduction - Currently, AI ethics is mute about the impact of AI technologies on nonhuman animals...

Summary

Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair, yet current fairness research largely overlooks nonhuman animals. This paper aims to shrink that gap by critically commenting on current fairness research in AI, by introducing the term 'speciesist bias', and by investigating examples of speciesist biases in existing AI applications in the fields of image recognition, language models, and recommender systems. Speciesist biases are learned and solidified by AI applications when they are trained on datasets in which speciesist patterns prevail; such patterns can be found in image recognition systems, large language models, and recommender systems. AI technologies therefore currently play a significant role in perpetuating and normalizing violence against animals. The paper explores discrimination against animals, understood as the unjust treatment of different categories of individuals based on species membership, and examines the moral consideration of nonhuman animals in the context of AI ethics, highlighting the need to widen the scope of AI fairness frameworks to include mitigation measures for speciesist biases.