
Engstler, Paul; Keicher, Matthias; Schinz, David; Mach, Kristina; Gersing, Alexandra S.; Foreman, Sarah C.; Goller, Sophia S.; Weissinger, Juergen; Rischewski, Jon; Dietrich, Anna-Sophia; Wiestler, Benedikt; Kirschke, Jan S.; Khakzar, Ashkan and Navab, Nassir (2022): Interpretable Vertebral Fracture Diagnosis. In: Interpretability of Machine Intelligence in Medical Image Computing. 5th International Workshop, iMIMIC 2022, Held in Conjunction with MICCAI 2022, Singapore, Singapore, September 22, 2022, Proceedings. Lecture Notes in Computer Science, Vol. 13611. Cham, Switzerland: Springer. pp. 71-81

Full text not available on 'Open Access LMU'.

Abstract

Do black-box neural network models learn clinically relevant features for fracture diagnosis? The answer not only establishes reliability and quenches scientific curiosity, but also leads to explainable and verbose findings that can assist radiologists in the final diagnosis and increase trust. This work identifies the concepts networks use for vertebral fracture diagnosis in CT images. This is achieved by associating concepts to neurons highly correlated with a specific diagnosis in the dataset. The concepts are either associated with neurons by radiologists pre-hoc or are visualized during a specific prediction and left to the user's interpretation. We evaluate which concepts lead to correct diagnoses and which concepts lead to false positives. The proposed frameworks and analysis pave the way for reliable and explainable vertebral fracture diagnosis. The code is publicly available (https://github.com/CAMP-eXplain-AI/Interpretable-Vertebral-Fracture-Diagnosis).
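The core step the abstract describes, finding neurons whose activations correlate with a specific diagnosis, can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); it simply computes, under the assumption of per-sample neuron activations and a binary fracture label, the Pearson correlation of each neuron with the label, so that the most diagnosis-aligned neurons can be inspected and associated with concepts:

```python
import numpy as np

def neuron_label_correlation(activations, labels):
    """Pearson correlation of each neuron's activation with a binary label.

    activations: (n_samples, n_neurons) array of neuron activations
    labels: (n_samples,) binary array (1 = fracture, 0 = healthy)
    Returns a (n_neurons,) vector of correlations.
    """
    a = activations - activations.mean(axis=0)
    l = labels - labels.mean()
    denom = activations.std(axis=0) * labels.std()
    # Guard against constant (zero-variance) neurons
    denom = np.where(denom == 0, 1.0, denom)
    return (a * l[:, None]).mean(axis=0) / denom

# Toy example: neuron 0 tracks the label, neuron 1 is pure noise
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200).astype(float)
acts = np.stack([labels + 0.1 * rng.normal(size=200),
                 rng.normal(size=200)], axis=1)
corr = neuron_label_correlation(acts, labels)
top_neurons = np.argsort(-np.abs(corr))  # candidates for concept association
```

In the toy example, the label-tracking neuron receives a correlation near 1 while the noise neuron stays near 0; the highest-ranked neurons would then be the ones a radiologist annotates with concepts pre-hoc, or whose activations are visualized at prediction time.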
