Abstract
Trust in media significantly affects content credibility, user engagement, and the legitimacy of our information ecosystem. Yet this already fragile foundation is eroding—fastest where it matters most in today’s digitalized world: on media platforms. One key driver of this erosion is audiovisual AI-generated content (AVAIGC): images, audio, and videos—created within seconds, increasingly realistic, and highly persuasive. In response, and driven by regulatory pressure, AI labels have emerged as the most prominent transparency measure. However, it remains unclear how media users perceive the labeling of AVAIGC in relation to trust in media platforms, and which content sources they regard as most critical to label. Based on 15 focus group interviews with 75 users from five age cohorts, this study reveals two findings. First, labeling is perceived as trust-enhancing, but this perception is shaped by label functionality, label credibility, content relevance, media engagement, media literacy, and AI literacy. Second, users prioritize labeling Producer-Generated Content over User-Generated Content and advertisements.
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Keywords: | Audiovisual AI-generated Content; Deepfakes; Labeling; Trust in Media Platforms |
| Faculties: | Munich School of Management > Institute for Digital Management and New Media > Digital Media Companies |
| Subjects: | 300 Social sciences > 330 Economics |
| Place of Publication: | Atlanta |
| Language: | English |
| Item ID: | 128924 |
| Date Deposited: | 04. Nov 2025 10:19 |
| Last Modified: | 04. Nov 2025 10:19 |
