4.7 Article

Comparative Analysis of Pixel-Level Fusion Algorithms and a New High-Resolution Dataset for SAR and Optical Image Fusion

Journal

REMOTE SENSING
Volume 15, Issue 23, Pages: -

Publisher

MDPI
DOI: 10.3390/rs15235514

Keywords

synthetic aperture radar (SAR); optical image; image fusion; image classification

Abstract

Synthetic aperture radar (SAR) and optical images often present different geometric structures and texture features for the same ground object. Fusing SAR and optical images can effectively integrate their complementary information and thus better meet the requirements of remote sensing applications such as target recognition, classification, and change detection, enabling the collaborative use of multi-modal imagery. To identify suitable methods for high-quality SAR and optical image fusion, this paper presents a systematic review of current pixel-level fusion algorithms and selects eleven representative methods, covering component substitution (CS), multiscale decomposition (MSD), and model-based approaches, for comparative analysis. For the experiments, we produce a high-resolution SAR and optical image fusion dataset (named YYX-OPT-SAR) covering three scene types: urban, suburban, and mountain. This dataset and a publicly available medium-resolution dataset are used to evaluate the fusion methods against three kinds of criteria: visual evaluation, objective image quality metrics, and classification accuracy. In the evaluation using image quality metrics, the results show that MSD methods avoid the negative effects of SAR image shadows on the corresponding areas of the fusion result more effectively than CS methods, while model-based methods perform relatively poorly. Among all the compared methods, the non-subsampled contourlet transform method (NSCT) produces the best fusion results. In the evaluation using image classification, most experiments show higher overall classification accuracy after fusion than before, indicating that optical-SAR fusion can improve land-cover classification, with the gradient transfer fusion method (GTF) yielding the best classification results among the compared methods.
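
As a concrete illustration of the pixel-level multiscale-decomposition (MSD) family compared in the paper, the sketch below fuses a co-registered optical band and SAR band with a plain discrete wavelet transform. This is a simplified stand-in, not the paper's implementation: the best-performing method reported there, NSCT, requires a dedicated non-subsampled contourlet implementation. The function name dwt_fuse, the db2 wavelet, and the fusion rules (averaged approximation, maximum-magnitude details) are illustrative assumptions.

```python
# Minimal sketch of pixel-level optical-SAR fusion via a discrete wavelet
# transform (DWT), a simpler analogue of the MSD methods discussed in the
# paper. Names and parameters are illustrative, not the paper's setup.
import numpy as np
import pywt


def dwt_fuse(optical: np.ndarray, sar: np.ndarray,
             wavelet: str = "db2", level: int = 3) -> np.ndarray:
    """Fuse two co-registered single-band images of identical shape."""
    # Decompose both images into approximation + detail sub-bands.
    c_opt = pywt.wavedec2(optical.astype(np.float64), wavelet, level=level)
    c_sar = pywt.wavedec2(sar.astype(np.float64), wavelet, level=level)

    # Fusion rules: average the low-frequency approximation; at each
    # position keep the detail coefficient with the larger magnitude.
    fused = [(c_opt[0] + c_sar[0]) / 2.0]
    for (oh, ov, od), (sh, sv, sd) in zip(c_opt[1:], c_sar[1:]):
        fused.append(tuple(
            np.where(np.abs(o) >= np.abs(s), o, s)
            for o, s in ((oh, sh), (ov, sv), (od, sd))
        ))

    # Reconstruct the fused image from the merged coefficients.
    return pywt.waverec2(fused, wavelet)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    opt = rng.random((256, 256))   # placeholder for an optical band
    sar = rng.random((256, 256))   # placeholder for a despeckled SAR band
    print(dwt_fuse(opt, sar).shape)
```

The maximum-magnitude rule on the detail sub-bands keeps the stronger edge and texture response from either sensor, while averaging the approximation moderates large radiometric differences between the modalities; in practice the inputs would first be despeckled and precisely co-registered.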
