
Singh, Devesh; Brima, Yusuf; Levin, Fedor; Becker, Martin; Hiller, Bjarne; Hermann, Andreas; Villar-Munoz, Irene; Beichert, Lukas; Bernhardt, Alexander; Buerger, Katharina; Butryn, Michaela; Dechent, Peter; Düzel, Emrah; Ewers, Michael; Fliessbach, Klaus; Freiesleben, Silka D. ORCID: https://orcid.org/0000-0002-2141-8671; Glanz, Wenzel ORCID: https://orcid.org/0000-0002-5865-4176; Hetzer, Stefan ORCID: https://orcid.org/0000-0002-1773-1518; Janowitz, Daniel ORCID: https://orcid.org/0009-0003-4090-547X; Görß, Doreen; Kilimann, Ingo ORCID: https://orcid.org/0000-0002-3269-4452; Kimmich, Okka ORCID: https://orcid.org/0009-0008-2119-7590; Laske, Christoph; Levin, Johannes ORCID: https://orcid.org/0000-0001-5092-4306; Lohse, Andrea; Luesebrink, Falk ORCID: https://orcid.org/0000-0001-5770-0727; Munk, Matthias; Perneczky, Robert ORCID: https://orcid.org/0000-0003-1981-7435; Peters, Oliver ORCID: https://orcid.org/0000-0003-0568-2998; Preis, Lukas; Priller, Josef; Prudlo, Johannes; Prychynenko, Diana; Rauchmann, Boris S. ORCID: https://orcid.org/0000-0003-4547-6240; Rostamzadeh, Ayda; Roy-Kluth, Nina; Scheffler, Klaus; Schneider, Anja; Droste zu Senden, Louise; Schott, Björn H.; Spottke, Annika; Synofzik, Matthis; Wiltfang, Jens; Jessen, Frank; Weber, Marc-André; Teipel, Stefan J. and Dyrba, Martin ORCID: https://orcid.org/0000-0002-3353-3167 (2025): An unsupervised XAI framework for dementia detection with context enrichment. In: Scientific Reports, Vol. 15, 39554 [PDF, 4MB]

Creative Commons: Attribution 4.0 (CC-BY)
Published Version

Abstract

Explainable Artificial Intelligence (XAI) methods enhance the diagnostic efficiency of clinical decision support systems by making the predictions of a convolutional neural network (CNN) on brain imaging more transparent and trustworthy. However, their clinical adoption is limited by insufficient validation of explanation quality. Our study introduces a framework that evaluates XAI methods by integrating neuroanatomical morphological features with CNN-generated relevance maps for disease classification. We trained a CNN using brain MRI scans from six cohorts: ADNI, AIBL, DELCODE, DESCRIBE, EDSD, and NIFD (N = 3253), including participants who were cognitively normal or had amnestic mild cognitive impairment, dementia due to Alzheimer’s disease, or frontotemporal dementia. Clustering analysis benchmarked different explanation space configurations, using morphological features as proxy ground truth. We implemented three post-hoc explanation methods: (i) simplification of model decisions, (ii) explanation-by-example, and (iii) textual explanations. A qualitative evaluation by clinicians (N = 6) was performed to assess their clinical validity. Clustering performance improved in morphology-enriched explanation spaces, with gains in both the homogeneity and the completeness of the clusters. Post-hoc explanations by model simplification largely delineated converters and stable participants, while explanation-by-example presented possible cognition trajectories. Textual explanations gave rule-based summarization of pathological findings. The clinicians’ qualitative evaluation highlighted challenges and opportunities of XAI for different clinical applications. Our study refines XAI explanation spaces and applies various approaches for generating explanations. Within the context of AI-based decision support systems in dementia research, we found the explanation methods promising for enhancing diagnostic efficiency, backed by the clinical assessments.
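The abstract reports benchmarking explanation spaces via clustering, scored by homogeneity and completeness against morphology-derived proxy labels. A minimal sketch of that evaluation pattern is shown below using scikit-learn's standard metrics; the embeddings, proxy labels, and cluster count here are synthetic stand-ins, not the study's actual data or pipeline.

```python
# Sketch of clustering-based evaluation of an explanation space.
# All data below is synthetic and illustrative; the paper's real inputs
# would be CNN relevance-map features and morphological proxy labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_score, completeness_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 120 participants, 16-dim explanation features,
# and 4 proxy classes derived from morphological features.
relevance_embeddings = rng.normal(size=(120, 16))
proxy_labels = rng.integers(0, 4, size=120)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    relevance_embeddings
)

# Homogeneity: each cluster contains members of a single proxy class.
# Completeness: all members of a proxy class land in the same cluster.
print(f"homogeneity:  {homogeneity_score(proxy_labels, clusters):.3f}")
print(f"completeness: {completeness_score(proxy_labels, clusters):.3f}")
```

Both scores range from 0 to 1; an explanation space enriched with morphological features would be expected to raise both relative to a relevance-only space.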
