Abstract
Reputation is a central element of social communication, whether between humans or with artificial intelligence (AI), and as such can be the primary target of malicious communication strategies. A vast literature already treats trust networks and their dynamics using Bayesian principles and Theory of Mind models. An issue for these simulations is the amount of information that must be stored and managed, which is often handled by discretizing variables and imposing hard thresholds. Here we propose a novel approach to updating information that accounts for knowledge uncertainty and is closer to reality. Agents use information-compression techniques to capture their complex environment and store it in their finite memories. The resulting loss of information leads to emergent phenomena such as echo chambers, self-deception, deception symbiosis, and the freezing of group opinions. Various malicious agent strategies, such as sycophancy, egocentricity, pathological lying, and aggressiveness, are studied for their impact on group sociology. Our set-up already provides insights into social interactions and can be used to investigate the effects of various communication strategies and to find ways to counteract malicious ones. Eventually this work should help to safeguard the design of non-abusive AI systems.
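To make the abstract's core mechanism concrete, the following is a minimal sketch of a gossiping-agent simulation: agents hold reputations of one another, weight incoming claims by the speaker's stored reputation, and store the result lossily on a coarse grid, mimicking finite memory and information compression. All class names, parameters (`levels`, `weight`), and update rules are illustrative assumptions, not the paper's actual model.

```python
import random


class Agent:
    """Toy agent with a finite, lossily compressed memory of reputations.
    Illustrative only; the update rule is an assumption, not the paper's."""

    def __init__(self, n_agents, honesty, levels=5):
        self.honesty = honesty              # probability of reporting truthfully
        self.levels = levels                # reputation stored on a coarse grid
        self.reputation = [0.5] * n_agents  # neutral prior about everyone

    def compress(self, x):
        # Lossy storage: round the continuous estimate to a coarse grid,
        # mimicking finite memory / information compression.
        return round(x * (self.levels - 1)) / (self.levels - 1)

    def receive(self, speaker, about, claimed, weight=0.2):
        # Weight the claim by the speaker's own stored reputation,
        # blend it into the old opinion, then re-compress.
        trust = self.reputation[speaker]
        blended = ((1 - weight * trust) * self.reputation[about]
                   + weight * trust * claimed)
        self.reputation[about] = self.compress(blended)


def gossip_step(agents):
    # One random gossip event: a speaker tells a listener about a third agent.
    speaker, listener, about = random.sample(range(len(agents)), 3)
    truth = agents[about].honesty
    # A dishonest speaker inverts the claim with probability 1 - honesty.
    claimed = truth if random.random() < agents[speaker].honesty else 1 - truth
    agents[listener].receive(speaker, about, claimed)


random.seed(0)
agents = [Agent(10, honesty=random.uniform(0.3, 1.0)) for _ in range(10)]
for _ in range(2000):
    gossip_step(agents)
```

Because every stored opinion is snapped to the coarse grid, small contradicting signals can no longer move an opinion off its current grid point, which is one plausible way the freezing of group opinions described in the abstract can arise.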
Document type: | Journal article |
---|---|
Faculty: | Physics |
Subject areas: | 500 Natural sciences and mathematics > 530 Physics |
ISSN: | 0003-3804 |
Language: | English |
Document ID: | 111520 |
Date published on Open Access LMU: | 02 Apr 2024, 07:27 |
Last modified: | 02 Apr 2024, 07:27 |