Article

Multi-scale Adaptive Feature Fusion Network for Semantic Segmentation in Remote Sensing Images

Journal

REMOTE SENSING
Volume 12, Issue 5

Publisher

MDPI
DOI: 10.3390/rs12050872

Keywords

multi-scale context; adaptive fusion; remote sensing image; semantic segmentation; CNN; deep learning

Funding

  1. National Natural Science Foundation of China [61773304, 61836009, 61871306, 61772399, U1701267]
  2. Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) [B07048]
  3. Program for Cheung Kong Scholars and Innovative Research Team in University [IRT1170]
  4. EPSRC [EP/P017487/1] Funding Source: UKRI


Semantic segmentation of high-resolution remote sensing images is highly challenging due to the presence of a complicated background, irregular target shapes, and similarities in the appearance of multiple target categories. Most existing segmentation methods that rely only on a simple fusion of the extracted multi-scale features often fail to provide satisfactory results when there is a large difference in target sizes. To handle this problem through multi-scale context extraction and efficient fusion of multi-scale features, this paper presents an end-to-end multi-scale adaptive feature fusion network (MANet) for semantic segmentation of remote sensing images. It is an encoder-decoder structure that includes a multi-scale context extraction module (MCM) and an adaptive fusion module (AFM). The MCM employs two layers of atrous convolutions with different dilation rates, together with global average pooling, to extract context information at multiple scales in parallel. MANet embeds a channel attention mechanism to fuse semantic features: the high- and low-level semantic features are concatenated, and global features are generated from them via global average pooling. These global features are passed through a fully connected layer to obtain adaptive weights for each channel, and these weights are then applied to the fused features to accomplish an efficient fusion. The performance of the proposed method has been evaluated by comparing it with six other state-of-the-art networks: fully convolutional networks (FCN), U-Net, UZ1, Light-weight RefineNet, DeepLabv3+, and APPD. Experiments performed on the publicly available Potsdam and Vaihingen datasets show that the proposed MANet significantly outperforms the other networks, with overall accuracy reaching 89.4% and 88.2%, and average F1 scores reaching 90.4% and 86.7%, respectively.
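The abstract describes the two core modules only at a high level. Below is a minimal, hypothetical PyTorch sketch of how an MCM (parallel atrous convolution branches plus image-level pooling) and an AFM (channel-attention fusion of concatenated high- and low-level features) could be structured. The dilation rates, channel sizes, reduction ratio, normalization layers, branch depth, and the choice of PyTorch itself are assumptions not specified in the abstract; this is an illustrative sketch, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleContextModule(nn.Module):
    """Hypothetical MCM sketch: parallel branches of two stacked atrous
    convolutions with different dilation rates, plus an image-level
    global-average-pooling branch, fused by concatenation."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Image-level context via global average pooling.
        self.gap = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.ReLU(inplace=True),
        )
        self.project = nn.Conv2d(out_ch * (len(dilations) + 1), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.gap(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))


class AdaptiveFusionModule(nn.Module):
    """Hypothetical AFM sketch: concatenate high- and low-level features,
    derive per-channel weights via global average pooling and a fully
    connected layer, and re-weight the fused features."""

    def __init__(self, high_ch, low_ch, reduction=16):
        super().__init__()
        ch = high_ch + low_ch
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch),
            nn.Sigmoid(),
        )

    def forward(self, high, low):
        # Upsample the coarser high-level features to the low-level size.
        high = F.interpolate(high, size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        fused = torch.cat([high, low], dim=1)          # concatenated features
        b, c, _, _ = fused.shape
        weights = self.fc(fused.mean(dim=(2, 3)))      # GAP -> FC channel weights
        return fused * weights.view(b, c, 1, 1)        # adaptively re-weighted fusion
```

The squeeze-and-excitation-style pattern in AdaptiveFusionModule is one common way to realize the described GAP-plus-fully-connected channel weighting; the paper's exact layer configuration may differ.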
