Abstract
Robots are becoming an integral part of society and may soon take on roles that involve making morally relevant decisions. In a pre-registered experiment (n = 184), we investigated which factors modulate the extent to which we trust a robot to make a moral choice. Specifically, we assessed the effects of anthropomorphic appearance and of attributions of agency and affect. Participants were presented with moral dilemmas in which the individual having to make a decision was a humanoid or a mechanical robot. Each robot was described in vignettes that attributed agency and/or affective states to it. Subsequently, we measured participants' implicit moral trust in the robot, as well as their explicit trust, the perceived capability of the robot, and the extent to which they felt the robot was responsible for its choice. Both agency and affective state attributions were found to affect participants' implicit and explicit trust as well as the perceived capability of the robot. Moreover, across conditions, mechanical robots were trusted significantly more than humanoid robots to make moral choices.
| Document type: | Journal article |
|---|---|
| Faculty: | Psychologie und Pädagogik > Department Psychologie |
| Subject areas: | 100 Philosophy and psychology > 150 Psychology |
| ISSN: | 0737-0024 |
| Language: | English |
| Document ID: | 110378 |
| Date published on Open Access LMU: | 02 Apr 2024, 07:17 |
| Last modified: | 02 Apr 2024, 07:17 |