Abstract
In this paper, we address the multimodal registration problem from a novel perspective, aiming to predict the transformation that aligns two images directly from their visual appearance. We formulate the prediction as a supervised regression task, taking joint image descriptors as input and producing as output the parameters of the transformation that guides the moving image towards alignment. We model the joint local appearance with context-aware descriptors that capture local and global cues in the two modalities simultaneously, while the regression function is based on gradient-boosted trees, which can handle the very large contextual feature space. The favorable properties of our predictions allow us to couple them with a simple gradient-based optimization for the final registration. Our approach can be applied to any transformation parametrization as well as a broad range of modality pairs. Our method learns the relationship between the intensity distributions of a pair of modalities from prior knowledge in the form of a small training set of aligned image pairs (on the order of 1-5 in our experiments). We demonstrate the flexibility and generality of our method by evaluating its performance on a variety of multimodal imaging pairs obtained from two publicly available datasets, RIRE (brain MR, CT and PET) and IXI (brain MR). We also show results for the very challenging deformable registration of Intravascular Ultrasound and Histology images. In these experiments, our approach achieves a larger capture range than other state-of-the-art methods, while improving registration accuracy in complex cases.
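To make the regression setup described above more concrete, the following is a minimal sketch in Python. It is not the authors' implementation: it uses scikit-learn's `GradientBoostingRegressor` (wrapped in `MultiOutputRegressor`) as a stand-in for the gradient-boosted trees, a placeholder `extract_joint_descriptors` function in place of the context-aware descriptors, and synthetic training data; all names and parameter values are illustrative assumptions.

```python
# Hedged sketch: regress transformation-parameter updates from joint image
# descriptors with gradient-boosted trees, then refine iteratively with a
# simple damped update standing in for the gradient-based optimization.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

def extract_joint_descriptors(fixed, moving):
    """Placeholder for context-aware joint descriptors (local + global cues)."""
    local_cues = np.stack([fixed.ravel(), moving.ravel()], axis=1)
    global_cues = np.tile([fixed.mean(), moving.mean()], (local_cues.shape[0], 1))
    return np.hstack([local_cues, global_cues]).mean(axis=0, keepdims=True)

# Training data (synthetic placeholder): descriptors of perturbed image pairs
# paired with the known parameter updates that would restore alignment.
X_train = np.random.rand(200, 4)   # joint descriptors of perturbed pairs
y_train = np.random.rand(200, 6)   # transformation-parameter updates

regressor = MultiOutputRegressor(
    GradientBoostingRegressor(n_estimators=100, max_depth=3))
regressor.fit(X_train, y_train)

# Test time: predict an update from the current joint appearance and take a
# damped step; a real pipeline would re-warp the moving image each iteration.
theta = np.zeros(6)
fixed = np.random.rand(32, 32)
moving = np.random.rand(32, 32)
for _ in range(10):
    descriptor = extract_joint_descriptors(fixed, moving)
    delta = regressor.predict(descriptor)[0]
    theta += 0.5 * delta
```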
| Document type: | Journal article |
| --- | --- |
| Faculty: | Psychologie und Pädagogik > Department Psychologie |
| Subject areas: | 100 Philosophy and Psychology > 150 Psychology |
| ISSN: | 1361-8415 |
| Language: | English |
| Document ID: | 53247 |
| Date published on Open Access LMU: | 14 Jun 2018, 09:52 |
| Last modified: | 4 Nov 2020, 13:32 |