Article

MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration

Journal

MEDICAL IMAGE ANALYSIS
Vol. 16, Issue 7, pp. 1423-1435

Publisher

ELSEVIER
DOI: 10.1016/j.media.2012.05.008

Keywords

Non-rigid registration; Multi-modal similarity metric; Self-similarity; Non-local means; Pulmonary images

Funding

  1. EPSRC [EP/H050892/1]
  2. Cancer Research UK within the Oxford Cancer Imaging Centre

Abstract

Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations. (c) 2012 Elsevier B.V. All rights reserved.
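The self-similarity idea described above can be illustrated with a minimal 2D sketch. The descriptor at each pixel collects box-filtered patch distances to its four direct neighbours, normalises them by a local variance estimate, and maps them through an exponential; the similarity between two images is then the SSD of these descriptors. This is a simplification of the paper's formulation (which uses Gaussian patch weighting and a six-neighbourhood in 3D), and the function names `mind_descriptor`, `box_filter`, and `mind_ssd` are illustrative, not from the authors' implementation:

```python
import numpy as np

def box_filter(a, radius):
    """Mean over a (2*radius+1)^2 patch at every pixel (circular borders)."""
    out = np.zeros_like(a)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(a, (dy, dx), axis=(0, 1))
    return out / (2 * radius + 1) ** 2

def mind_descriptor(img, patch_radius=1):
    """2D sketch of MIND with a four-neighbour search region.

    For each offset r, Dp(x, x+r) is the patch-wise squared intensity
    difference; V(x) is estimated as the mean of Dp over the offsets,
    making the descriptor invariant to affine intensity changes.
    """
    offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    dps = np.stack([
        box_filter((img - np.roll(img, r, axis=(0, 1))) ** 2, patch_radius)
        for r in offsets
    ])
    # Variance estimate; small relative/absolute terms guard against
    # division by zero in flat regions.
    v = dps.mean(axis=0) + 1e-6 * dps.mean() + 1e-12
    desc = np.exp(-dps / v)
    return desc / desc.max(axis=0)  # normalise the max component to 1

def mind_ssd(d1, d2):
    """Point-wise multi-modal similarity: SSD of MIND representations."""
    return float(np.mean((d1 - d2) ** 2))
```

Because both the patch distances and the variance estimate scale together, the descriptor is unchanged under an affine remapping of intensities (e.g. `2 * img + 0.5`), which is one of the modality differences the paper targets.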
