Haliburton, Luke (ORCID: https://orcid.org/0000-0002-5654-2453); Leusmann, Jan; Welsch, Robin; Ghebremedhin, Sinksar; Isaakidis, Petros; Schmidt, Albrecht (ORCID: https://orcid.org/0000-0003-3890-1990) and Mayer, Sven (ORCID: https://orcid.org/0000-0001-5462-8782) (2024): Uncovering labeler bias in machine learning annotation tasks. In: AI and Ethics [Forthcoming]

Full text not available on 'Open Access LMU'.

Abstract

As artificial intelligence becomes increasingly pervasive, it is essential that we understand the implications of bias in machine learning. Many developers rely on crowd workers to generate and annotate datasets for machine learning applications. However, this step risks embedding training data with labeler bias, leading to biased decision-making in systems trained on these datasets. To characterize labeler bias, we created a face dataset and conducted two studies where labelers of different ethnicity and sex completed annotation tasks. In the first study, labelers annotated subjective characteristics of faces. In the second, they annotated images using bounding boxes. Our results demonstrate that labeler demographics significantly impact both subjective and accuracy-based annotations, indicating that collecting a diverse set of labelers may not be enough to solve the problem. We discuss the consequences of these findings for current machine learning practices to create fair and unbiased systems.