Article

Cross-modal attention for multi-modal image registration

Journal

MEDICAL IMAGE ANALYSIS
Volume 82

Publisher

ELSEVIER
DOI: 10.1016/j.media.2022.102612

Keywords

Multi-modal registration; Deep learning; Cross-modal attention; Prostate cancer imaging

Funding

  1. National Institute of Biomedical Imaging and Bioengineering (NIBIB) of the National Institutes of Health (NIH), USA [R21EB028001, R01EB027898]
  2. National Cancer Institute

This paper introduces a novel cross-modal attention mechanism for medical image registration, along with a contrastive learning-based pre-training method to improve network performance.
In the past few years, convolutional neural networks (CNNs) have proven powerful in extracting image features crucial for medical image registration. However, challenging applications and recent advances in computer vision suggest that CNNs are limited in their ability to understand the spatial correspondence between features, which is at the core of image registration. The issue is further exacerbated in multi-modal image registration, where the appearances of the input images can differ significantly. This paper presents a novel cross-modal attention mechanism for correlating features extracted from the multi-modal input images and mapping such correlation to the registration transformation. To efficiently train the developed network, a contrastive learning-based pre-training method is also proposed to aid the network in extracting high-level features across the input modalities for the subsequent cross-modal attention learning. We validated the proposed method on transrectal ultrasound (TRUS) to magnetic resonance (MR) registration, a clinically important procedure for prostate cancer biopsy. Our experimental results demonstrate that for MR-TRUS registration, a deep neural network embedded with the cross-modal attention block outperforms other advanced CNN-based networks ten times its size. We also incorporated visualization techniques to improve the interpretability of our network, which helps bring insight into deep learning-based image registration methods. The source code of our work is available at https://github.com/DIAL-RPI/Attention-Reg.
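
To make the mechanism concrete, below is a minimal PyTorch sketch of a cross-modal attention block in the non-local style the abstract describes: queries come from one modality's feature map and keys/values from the other, so the attention map encodes spatial correspondence between the two volumes. The class name, layer choices, and tensor shapes are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Non-local-style attention correlating features from two modalities.

    Queries come from one modality (e.g., TRUS) and keys/values from the
    other (e.g., MR), so the attention map encodes spatial correspondence.
    Names and shapes are illustrative, not the authors' exact code.
    """

    def __init__(self, channels, inter_channels=None):
        super().__init__()
        inter_channels = inter_channels or channels // 2
        # 1x1x1 convolutions project both modalities into a shared space
        self.theta = nn.Conv3d(channels, inter_channels, kernel_size=1)  # queries
        self.phi = nn.Conv3d(channels, inter_channels, kernel_size=1)    # keys
        self.g = nn.Conv3d(channels, inter_channels, kernel_size=1)      # values
        self.out = nn.Conv3d(inter_channels, channels, kernel_size=1)

    def forward(self, x_q, x_kv):
        # x_q, x_kv: (B, C, D, H, W) feature volumes from the two modalities
        b, _, d, h, w = x_q.shape
        q = self.theta(x_q).flatten(2).transpose(1, 2)  # (B, N, C'), N = D*H*W
        k = self.phi(x_kv).flatten(2)                   # (B, C', N)
        v = self.g(x_kv).flatten(2).transpose(1, 2)     # (B, N, C')
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)  # (B, N, N)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, d, h, w)
        return self.out(y) + x_q  # residual connection to the query modality
```

In a registration network, two such blocks can be applied symmetrically (TRUS queries attending to MR features and vice versa), with their outputs passed to a regression head that predicts the transformation parameters.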
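
The contrastive pre-training can likewise be sketched with a standard InfoNCE-style objective over paired embeddings: features pooled from the same MR/TRUS pair form positives, and the rest of the batch serves as negatives. This is a generic formulation under assumed names (info_nce_loss, z_mr, z_trus); the paper's exact loss and sampling scheme may differ.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(z_mr, z_trus, temperature=0.1):
    """Symmetric InfoNCE loss for paired cross-modal embeddings.

    z_mr, z_trus: (B, D) pooled feature vectors from the two encoders,
    where row i of each tensor comes from the same MR/TRUS pair.
    Matching rows are positives; other rows in the batch are negatives.
    """
    z_mr = F.normalize(z_mr, dim=1)
    z_trus = F.normalize(z_trus, dim=1)
    logits = z_mr @ z_trus.t() / temperature                  # (B, B) similarities
    targets = torch.arange(z_mr.size(0), device=z_mr.device)  # diagonal = positives
    # Average the MR->TRUS and TRUS->MR retrieval losses
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```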
