
Eiband, Malin; Buschek, Daniel; Kremer, Alexander and Hussmann, Heinrich (2019): The Impact of Placebic Explanations on Trust in Intelligent Systems. In: CHI EA '19: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems.

Full text not available on 'Open Access LMU'.

Abstract

Work in social psychology on interpersonal interaction [5] has demonstrated that people are more likely to comply with a request if they are presented with a justification — even if this justification conveys no information. In light of the many calls for explaining the reasoning of interactive intelligent systems to users, we investigate whether this effect holds true for human-computer interaction. Using a prototype of a nutrition recommender, we conducted a lab study (N=30) between three groups (no explanation, placebic explanation, and real explanation). Our results indicate that placebic explanations for algorithmic decision-making may indeed invoke perceived levels of trust similar to real explanations. We discuss how placebic explanations could be considered in future work.
