Abstract
Trust in media significantly affects content credibility, user engagement, and the legitimacy of our information ecosystem. Yet this already fragile foundation is eroding, and fastest where it matters most in today's digitalized world: on media platforms. One key driver of this erosion is audiovisual AI-generated content (AVAIGC): images, audio, and videos that are created within seconds, increasingly realistic, and highly persuasive. In response, and driven by regulatory pressure, AI labels have emerged as the most prominent transparency measure. However, it remains unclear how media users perceive the labeling of AVAIGC in relation to trust in media platforms, and which content sources they regard as most critical to label. Based on 15 focus group interviews with 75 users from five age cohorts, this study reveals two findings. First, labeling is perceived as trust-enhancing, but this perception is shaped by label functionality, label credibility, content relevance, media engagement, media literacy, and AI literacy. Second, users prioritize labeling Producer-Generated Content over User-Generated Content and advertisements.
| Document type: | Conference contribution (Paper) |
|---|---|
| Keywords: | Audiovisual AI-generated Content; Deepfakes; Labeling; Trust in Media Platforms |
| Faculty: | Business Administration > Institut für Digitales Management und Neue Medien > Digitale Medienunternehmen |
| Subject areas: | 300 Social Sciences > 330 Economics |
| Place: | Atlanta |
| Language: | English |
| Document ID: | 128924 |
| Date published on Open Access LMU: | 04 Nov 2025 10:19 |
| Last modified: | 04 Nov 2025 10:19 |
