Article

Sentinel SAR-optical fusion for crop type mapping using deep learning and Google Earth Engine

Journal

ISPRS Journal of Photogrammetry and Remote Sensing

Publisher

ELSEVIER
DOI: 10.1016/j.isprsjprs.2021.02.018

Keywords

3D U-Net; Denoising neural networks; Sentinel-1; Sentinel-2; Data fusion

Funding

  1. National Science Foundation [IIA-1355406, IIA-1430427]
  2. National Aeronautics and Space Administration [NNX15AK03H]

Abstract

Accurate crop type mapping provides numerous benefits for a deeper understanding of food systems and yield prediction. Ever-increasing big data, easy access to high-resolution imagery, and cloud-based analytics platforms such as Google Earth Engine have drastically improved the ability of scientists to advance data-driven agriculture with improved crop type mapping algorithms that draw on remote sensing, computer vision, and machine learning. Crop type mapping techniques have, however, mainly relied on standalone SAR or optical imagery; few studies have investigated the potential of SAR-optical data fusion coupled with virtual constellations and 3-dimensional (3D) deep learning networks. To this end, we use a deep learning approach that utilizes the denoised backscatter and texture information from multi-temporal Sentinel-1 SAR data and the spectral information from multi-temporal optical Sentinel-2 data for mapping ten different crop types, as well as water, soil, and urban areas. Multi-temporal Sentinel-1 data were fused with multi-temporal optical Sentinel-2 data in an effort to improve classification accuracies for crop types. We compared the results of the 3D U-Net to state-of-the-art deep learning networks, including SegNet and 2D U-Net, as well as a commonly used machine learning method, Random Forest. The results showed that (1) fusing multi-temporal SAR and optical data yields higher training overall accuracies (OA) (3D U-Net 0.992, 2D U-Net 0.943, SegNet 0.871) and testing OA (3D U-Net 0.941, 2D U-Net 0.847, SegNet 0.643) for crop type mapping than standalone multi-temporal SAR or optical data; (2) optical data fused with SAR data denoised via a denoising convolutional neural network (OA 0.912) performed better for crop type mapping than optical data fused with boxcar- (OA 0.880), Lee- (OA 0.881), and median-filtered (OA 0.887) SAR data; and (3) 3D convolutional neural networks performed better than 2D convolutional neural networks for crop type mapping (SAR OA 0.912, optical OA 0.937, fused OA 0.992).
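
For illustration, the pixel-level fusion step can be sketched in the Google Earth Engine Python API as below. This is a minimal sketch, not the authors' released code: the region, date windows, cloud threshold, and band selections are assumptions.

    # Minimal sketch: fused multi-temporal Sentinel-1/Sentinel-2 stack in the
    # Google Earth Engine Python API. Assumes prior ee.Authenticate().
    import ee

    ee.Initialize()

    region = ee.Geometry.Rectangle([-98.5, 46.5, -97.5, 47.5])  # hypothetical AOI

    def monthly_fused(year, month):
        start = ee.Date.fromYMD(year, month, 1)
        end = start.advance(1, 'month')

        # Sentinel-1 IW GRD backscatter (VV/VH), median-composited per month.
        s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
              .filterBounds(region)
              .filterDate(start, end)
              .filter(ee.Filter.eq('instrumentMode', 'IW'))
              .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
              .select(['VV', 'VH'])
              .median())

        # Sentinel-2 surface reflectance, cloud-screened and median-composited.
        s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
              .filterBounds(region)
              .filterDate(start, end)
              .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
              .select(['B2', 'B3', 'B4', 'B8'])
              .median())

        # Pixel-level (layer-stack) fusion of SAR backscatter and optical bands.
        return s1.addBands(s2)

    # Concatenate growing-season months into one multi-temporal image;
    # ee.Image.cat suffixes duplicate band names automatically.
    months = [monthly_fused(2019, m) for m in range(5, 10)]
    fused_stack = ee.Image.cat(months)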
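
Result (2) compares a learned speckle filter against classical boxcar, Lee, and median filters. Below is a minimal sketch of a generic DnCNN-style residual denoiser for VV/VH backscatter; its depth, width, and residual formulation are illustrative assumptions, not the paper's exact network.

    # Minimal sketch: DnCNN-style residual speckle denoiser (PyTorch).
    import torch
    import torch.nn as nn

    class SpeckleDenoiser(nn.Module):
        def __init__(self, channels=2, width=64, depth=8):
            super().__init__()
            layers = [nn.Conv2d(channels, width, 3, padding=1),
                      nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(width, width, 3, padding=1),
                           nn.BatchNorm2d(width),
                           nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(width, channels, 3, padding=1)]
            self.net = nn.Sequential(*layers)

        def forward(self, noisy):
            # Residual learning: predict the speckle component and subtract it.
            return noisy - self.net(noisy)

    vv_vh = torch.randn(1, 2, 256, 256)  # one VV/VH tile (log-scaled backscatter)
    clean = SpeckleDenoiser()(vv_vh)     # same shape as the input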
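
Result (3) hinges on 3D convolutions treating time as a true data dimension: a Conv3d kernel slides over (time, height, width) jointly, so temporal phenology and spatial texture are learned together, whereas a 2D network must flatten dates into channels. A minimal encoder block in this style is sketched below; shapes and layer widths are assumptions, not the paper's architecture.

    # Minimal sketch: one 3D U-Net-style encoder block (PyTorch).
    import torch
    import torch.nn as nn

    class Encoder3DBlock(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm3d(out_ch),
                nn.ReLU(inplace=True),
                nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
            # Pool only the spatial axes; keep the temporal axis intact.
            self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2))

        def forward(self, x):
            features = self.block(x)
            return self.pool(features), features  # pooled output + skip connection

    # Batch of 2 samples, 6 fused bands per date, 5 dates, 128 x 128 pixels.
    x = torch.randn(2, 6, 5, 128, 128)
    pooled, skip = Encoder3DBlock(6, 32)(x)
    print(pooled.shape)  # torch.Size([2, 32, 5, 64, 64])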
