Abstract
Content-based techniques for credibility assessment (Criteria-Based Content Analysis [CBCA], Reality Monitoring [RM]) have been shown in previous meta-analyses to distinguish between experience-based and fabricated statements. New simulations have called the reliability of these results into question, revealing that applying standard meta-analytic methods to biased datasets can lead to false-positive rates of up to 100%. By assessing the performance of different bias-correcting meta-analytic methods and applying them to a set of 71 studies, we aimed for more precise effect size estimates. According to the sole bias-correcting meta-analytic method that performed well under a priori specified boundary conditions, CBCA and RM distinguished between experience-based and fabricated statements. However, substantial heterogeneity limited precise point estimation (effects ranged from moderate to large). In contrast, Scientific Content Analysis (SCAN), another content-based technique tested, failed to discriminate between truths and lies. We discuss how the gap between research on content-based credibility assessment and its forensic application may be narrowed.
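The abstract does not name the specific bias-correcting meta-analytic method that performed well, so the following is only a generic illustration of how small-study (publication) bias correction can work in principle: a minimal PET-style (Precision-Effect Test) weighted regression in Python, applied to purely hypothetical toy data. The function name, the simulated effect sizes, and the number of studies are illustrative assumptions, not the authors' procedure or data.

```python
import numpy as np

def pet_estimate(effect_sizes, standard_errors):
    """Illustrative PET-style bias correction (not the method used in the paper).

    Regresses observed effect sizes on their standard errors using weighted
    least squares (weights = 1 / SE^2). The intercept estimates the effect
    expected for a hypothetical study with SE approaching 0, i.e. an effect
    size adjusted for small-study (publication) bias.
    """
    y = np.asarray(effect_sizes, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    X = np.column_stack([np.ones_like(se), se])  # intercept + SE predictor
    w = 1.0 / se**2                              # precision weights
    # Weighted least squares: solve (X' W X) beta = X' W y
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0]  # intercept = bias-adjusted effect estimate

# Hypothetical toy data: smaller studies (larger SE) report inflated effects
rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.4, size=71)
d = 0.5 + 1.2 * se + rng.normal(0, se)  # true effect 0.5 plus small-study bias
print(f"Naive mean d:   {d.mean():.2f}")
print(f"PET-adjusted d: {pet_estimate(d, se):.2f}")
```

In this sketch the naive average overstates the effect because biased small studies pull it upward, while the regression intercept recovers an estimate closer to the assumed true value; this is the general logic behind bias-correcting meta-analytic estimators, whichever specific method the study employed.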
| Document type: | Journal article |
|---|---|
| Faculty: | Psychologie und Pädagogik > Department Psychologie |
| Subject areas: | 100 Philosophy and Psychology > 150 Psychology |
| ISSN: | 0888-4080 |
| Language: | English |
| Document ID: | 100898 |
| Date published on Open Access LMU: | 05 Jun 2023, 15:36 |
| Last modified: | 05 Jun 2023, 15:36 |