Abstract
We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not return the AI's benevolence to the same degree and exploited the AI more than they exploited humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans reciprocating their cooperativeness, run the risk of being exploited. This vulnerability calls not just for smarter machines but also for better human-centered policies.
| Item Type: | Journal article |
|---|---|
| Form of publication: | Publisher's Version |
| Keywords: | Computer science; Social sciences; Sociology |
| Faculties: | Psychology and Education Science > Department Psychology > General and Experimental Psychology; Social Sciences > Geschwister-Scholl-Institute for Political Science |
| Subjects: | 000 Computer science, information and general works > 004 Data processing, computer science; 100 Philosophy and Psychology > 150 Psychology |
| URN: | urn:nbn:de:bvb:19-epub-91978-6 |
| ISSN: | 2589-0042 |
| Language: | English |
| Item ID: | 91978 |
| Date Deposited: | 27 Apr 2022, 13:20 |
| Last Modified: | 27 Apr 2022, 13:21 |