Abstract
Modeling temporal changes in subcortical structures is crucial for a better understanding of the progression of Alzheimer’s disease (AD). Given their flexibility to adapt to heterogeneous sequence lengths, mesh-based transformer architectures have previously been proposed for predicting hippocampus deformations across time. However, one of the main limitations of transformers is the large number of trainable parameters, which makes their application to small datasets very challenging. In addition, current methods do not include relevant non-image information that can help to identify AD-related patterns in the progression. To this end, we introduce CASHformer, a transformer-based framework to model longitudinal shape trajectories in AD. CASHformer incorporates the idea of pre-trained transformers as universal compute engines that generalize across a wide range of tasks by freezing most layers during fine-tuning. This reduces the number of trainable parameters by over 90% with respect to the original model and therefore enables the application of large models to small datasets without overfitting. In addition, CASHformer models cognitive decline to reveal AD atrophy patterns in the temporal sequence. Our results show that CASHformer reduces the reconstruction error compared to previously proposed methods. Moreover, the accuracy of detecting patients progressing to AD increases when imputing missing longitudinal shape data.
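The parameter-efficiency idea described in the abstract, fine-tuning a pre-trained transformer while keeping most of its layers frozen, can be illustrated with a minimal sketch. The backbone choice (GPT-2 via Hugging Face) and the decision to leave only the layer-norm parameters trainable are illustrative assumptions and are not taken from the CASHformer implementation.

```python
# Minimal sketch (not the authors' code) of fine-tuning a pre-trained
# transformer as a "universal compute engine": freeze the attention and
# feed-forward blocks and train only a small fraction of the parameters.
import torch.nn as nn
from transformers import GPT2Model

encoder = GPT2Model.from_pretrained("gpt2")  # any pre-trained transformer backbone

# Freeze every parameter of the backbone first.
for param in encoder.parameters():
    param.requires_grad = False

# Un-freeze only the layer norms (GPT-2 names them ln_1, ln_2, ln_f),
# which are cheap to fine-tune relative to the full model.
for name, param in encoder.named_parameters():
    if "ln" in name:
        param.requires_grad = True

trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable parameters: {trainable / total:.2%} of {total}")
```

Task-specific input embeddings and output heads would be added on top of the frozen backbone and trained together with the unfrozen layer norms.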
Document type: | Conference contribution (Paper) |
---|---|
Faculty: | Medicine |
Subject areas: | 600 Technology, Medicine, Applied Sciences > 610 Medicine and Health |
ISSN: | 0302-9743 |
Place: | Cham, Switzerland |
Note: | ISBN 978-3-031-16430-9 |
Language: | English |
Document ID: | 110010 |
Date published on Open Access LMU: | 22 Mar. 2024, 09:54 |
Last modified: | 22 Mar. 2024, 10:08 |