Abstract
With impressive developments in human-robot interaction, it may seem that technology can do anything. Especially in the domain of social robots, which appear to be much more than programmed machines because of their anthropomorphic shape, people may overtrust the robot's actual capabilities and reliability. This presents a serious problem, especially when personal well-being might be at stake. Hence, insights into the development and influencing factors of overtrust in robots may form an important basis for countermeasures and sensible design decisions. An empirical study (N = 110) explored the development of overtrust using the example of a pet feeding robot. A 2 × 2 experimental design with repeated measurements contrasted the effects of one's own experience, skill demonstration, and reputation through experience reports of others. The experiment was realized in a video environment in which participants had to imagine going on a four-week safari trip and leaving their beloved cat at home in the care of a pet feeding robot. Every day, the participants had to make a choice: go on a day safari without calling options (risk and reward) or make a boring car trip to another village to check whether the feeding had been successful and activate an emergency call if not (safe and no reward). In parallel to cases of overtrust in other domains (e.g., autopilot), the feeding robot performed flawlessly most of the time until, in the fourth week, it failed on three consecutive days, resulting in the cat's death if participants had decided to go on the day safari on those days. As expected, with repeated positive experience of the robot's reliability in feeding the cat, trust levels rapidly increased and the number of control calls decreased. Compared to one's own experience, skill demonstration and reputation were largely neglected or had only a temporary effect. We integrate these findings into a conceptual model of (over)trust over time and connect them to related psychological concepts such as positivism, instant rewards, inappropriate generalization, wishful thinking, dissonance theory, and social concepts from human-human interaction. Limitations of the present study as well as implications for robot design and future research are discussed.
| Item Type | Journal article |
|---|---|
| Keywords | Human–robot interaction; overtrust; prior experience; reputation; demonstration; psychological perspective |
| Faculties | Psychology and Education Science > Department Psychology |
| Subjects | 100 Philosophy and Psychology > 150 Psychology |
| URN | urn:nbn:de:bvb:19-epub-102709-4 |
| ISSN | 2296-9144 |
| Language | English |
| Item ID | 102709 |
| Date Deposited | 05 Jun 2023, 15:40 |
| Last Modified | 30 Nov 2023, 18:22 |