
Tran, Manuel; Lahiani, Amal; Dicente Cid, Yashin; Boxberg, Melanie; Lienemann, Peter; Matek, Christian; Wagner, Sophia J.; Theis, Fabian J. (ORCID: https://orcid.org/0000-0002-2419-1943); Klaiman, Eldad and Peng, Tingying (2023): B-Cos Aligned Transformers Learn Human-Interpretable Features. 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Vancouver, Canada, October 8-12, 2023. In: Greenspan, Hayit; Madabhushi, Anant; Mousavi, Parvin; Salcudean, Septimiu; Duncan, James; Syeda-Mahmood, Tanveer and Taylor, Russell (eds.): Medical Image Computing and Computer Assisted Intervention – MICCAI 2023, Lecture Notes in Computer Science Vol. 14227. Cham: Springer, pp. 514-524.

Full text not available on 'Open Access LMU'.

Abstract

Vision Transformers (ViTs) and Swin Transformers (Swin) are currently state-of-the-art in computational pathology. However, domain experts are still reluctant to use these models due to their lack of interpretability. This is not surprising, as critical decisions need to be transparent and understandable. The most common approach to understanding transformers is to visualize their attention. However, attention maps of ViTs are often fragmented, leading to unsatisfactory explanations. Here, we introduce a novel architecture called the B-cos Vision Transformer (BvT) that is designed to be more interpretable. It replaces all linear transformations with the B-cos transform to promote weight-input alignment. In a blinded study, medical experts clearly ranked BvTs above ViTs, suggesting that our network is better at capturing biomedically relevant structures. This is also true for the B-cos Swin Transformer (Bwin). Compared to the Swin Transformer, it even improves the F1-score by up to 4.7% on two public datasets.
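The core idea mentioned in the abstract — replacing linear transformations with the B-cos transform — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it follows the general form of the B-cos transform (each unit's linear response is scaled by the alignment term |cos(x, w)|^(B−1), so high output requires the weight vector to align with the input). The function name, the parameter `b`, and the `eps` stabilizer are our own simplifications.

```python
import numpy as np

def bcos_transform(x, W, b=2.0, eps=1e-6):
    """Sketch of a B-cos layer for a single input vector x.

    Each output unit computes (w_hat . x) * |cos(x, w)|**(b - 1),
    where w_hat is the unit-normalized weight row. For b = 1 this
    reduces to an ordinary linear layer with normalized weights;
    larger b penalizes poorly aligned inputs more strongly.
    """
    # Unit-normalize each weight row so the dot product encodes alignment.
    W_hat = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    lin = x @ W_hat.T                        # w_hat . x per output unit
    cos = lin / (np.linalg.norm(x) + eps)    # cos(x, w), since w_hat has unit norm
    # Scale the linear response by the alignment factor (sign kept via lin).
    return np.abs(cos) ** (b - 1.0) * lin
```

For an input perfectly aligned with a weight row, the cosine factor is 1 and the output equals the input norm; for orthogonal inputs the output is suppressed toward zero, which is what makes the learned weights visually interpretable.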
