Abstract
In this article, we present new empirical evidence demonstrating the severe limitations of existing machine learning content moderation methods in keeping pace with, let alone staying ahead of, hateful language online. Building on the collaborative coding project “AI4Dignity”, we outline the ambiguities and complexities of annotating problematic text in AI-assisted moderation systems. We diagnose the shortcomings of the content moderation and natural language processing approach as emerging from a broader epistemological trapping wrapped in the liberal-modern idea of “the human”. Presenting a decolonial critique of the “human vs machine” conundrum and drawing attention to the structuring effects of coloniality on extreme speech, we propose “ethical scaling” to highlight the moderation process as political praxis. As a normative framework for platform governance, ethical scaling calls for a transparent, reflexive, and replicable process of iteration for content moderation with community participation and global parity, which should evolve in conjunction with addressing algorithmic amplification of divisive content and resource allocation for content moderation.
Document type: | Journal article |
---|---|
EU Funded Grant Agreement Number: | 957442 |
Publication form: | Publisher's Version |
Keywords: | AI, extreme speech, ethical scaling, decoloniality, social media content moderation, ethnography and algorithm auditing |
Faculty: | Kulturwissenschaften |
Subject areas: | 300 Social sciences > 300 Social sciences, Sociology |
URN: | urn:nbn:de:bvb:19-epub-104920-7 |
ISSN: | 2053-9517 |
Language: | English |
Document ID: | 104920 |
Date published on Open Access LMU: | 25 Jul 2023, 06:13 |
Last modified: | 21 Dec 2023, 17:38 |