Article

DenseU-Net-Based Semantic Segmentation of Objects in Urban Remote Sensing Images

Journal

IEEE ACCESS
Volume 7, Pages 65347-65356

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2019.2917952

Keywords

Class imbalance; deep convolutional neural networks; median frequency balancing; semantic segmentation; urban remote sensing images

Funding

  1. National Natural Science Foundation of China [61762024]
  2. Natural Science Foundation of Guangxi Province [2017GXNSFDA198050, 2016GXNSFAA380054]

Class imbalance is a serious problem that plagues the semantic segmentation task in urban remote sensing images. Because large object classes dominate the segmentation task, small object classes are usually suppressed, so solutions based on optimizing overall accuracy are often unsatisfactory. To address the class imbalance in semantic segmentation of urban remote sensing images, we developed a Down-sampling Block (DownBlock) for obtaining context information and an Up-sampling Block (UpBlock) for restoring the original resolution, and we propose an end-to-end deep convolutional neural network (DenseU-Net) architecture for pixel-wise urban remote sensing image segmentation. The main idea of DenseU-Net is to connect convolutional features through cascade (concatenation) operations and to use its symmetric structure to fuse the detail features in shallow layers with the abstract semantic features in deep layers. We also propose a focal loss function weighted by median frequency balancing (MFB_Focal loss), which effectively improves both the accuracy of the small object classes and the overall accuracy. Our experiments on the 2016 ISPRS Vaihingen 2D semantic labeling dataset demonstrated the following outcomes. When boundary pixels are considered (GT), MFB_Focal loss achieved good overall segmentation performance with the same U-Net model, improving the F1-score of the small object class car by 9.28% over the cross-entropy loss function. With the same MFB_Focal loss, the overall accuracy of DenseU-Net exceeded that of U-Net, and the F1-score of the car class was 6.71% higher. Finally, without any post-processing, DenseU-Net+MFB_Focal achieved an overall accuracy of 85.63% and a car-class F1-score of 83.23%, which is superior to HSN+OI+WBP both numerically and visually.
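The loss described in the abstract combines two standard ingredients: median frequency balancing, which weights each class by the ratio of the median class frequency to that class's frequency, and the focal loss, FL(p_t) = -(1 - p_t)^γ log(p_t). A minimal NumPy sketch of this combination is below; it is an illustration assembled from those standard definitions, not the paper's implementation, and the function names `mfb_weights` and `mfb_focal_loss` are hypothetical.

```python
import numpy as np

def mfb_weights(label_counts):
    """Median frequency balancing: weight_c = median(freq) / freq_c,
    where freq_c is the fraction of pixels belonging to class c.
    Rare classes (small freq_c) receive weights > 1."""
    freqs = label_counts / label_counts.sum()
    return np.median(freqs) / freqs

def mfb_focal_loss(probs, targets, weights, gamma=2.0, eps=1e-12):
    """Mean MFB-weighted focal loss over N pixels.
    probs:   (N, C) softmax probabilities
    targets: (N,)   integer class ids
    weights: (C,)   per-class MFB weights"""
    pt = probs[np.arange(len(targets)), targets]  # prob of the true class
    w = weights[targets]                          # MFB weight per pixel
    return np.mean(-w * (1.0 - pt) ** gamma * np.log(pt + eps))
```

With `gamma=2`, well-classified pixels (p_t near 1) contribute almost nothing, while the MFB weights keep a rare class such as car from being drowned out by dominant classes like buildings or impervious surfaces.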
