Article

Combining LiDAR Metrics and Sentinel-2 Imagery to Estimate Basal Area and Wood Volume in Complex Forest Environment via Neural Networks

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/JSTARS.2022.3175609

Keywords

Forestry; Remote sensing; Laser radar; Optical sensors; Optical imaging; Estimation; Vegetation; Convolutional neural networks (CNNs); forest monitoring; multiscale remote sensing; multisensor data fusion; structure and biophysical variables estimation

Funding

  1. French Agency for Ecological Transition (ADEME) through the PROTEST project [1703C0069]
  2. French Region of Occitanie
  3. K. Dayal's Ph.D. scholarship
  4. GRAINE program

Abstract

Forest ecosystems play a crucial role in global carbon storage and climate mechanisms. Using earth observation data, this study proposes a deep learning-based fusion strategy that combines airborne laser scanning and high-resolution optical imagery for forest characterization. The results highlight the importance of how multimodal data are combined for improved performance.

Forest ecosystems play a fundamental role in natural balances and climate mechanisms through their contribution to global carbon storage. Their sustainable management and conservation are crucial in the current context of global warming and biodiversity loss. To tackle such challenges, earth observation data have been identified as a valuable source of information. While earth observation data constitute an unprecedented opportunity to monitor forest ecosystems, their effective exploitation still poses serious challenges, since multimodal information needs to be combined to describe complex natural phenomena. To address this issue in the context of estimating structure and biophysical variables for forest characterization, we propose a new deep learning-based fusion strategy that combines high-density three-dimensional (3-D) point clouds acquired by airborne laser scanning with high-resolution optical imagery. To manage and fully exploit the available multimodal information, we implement a two-branch late-fusion deep learning architecture that takes advantage of the specificity of each modality: a 2-D convolutional neural network (CNN) branch analyzes the Sentinel-2 time series data, while a multilayer perceptron branch processes the LiDAR-derived information. The performance of our framework is evaluated on two forest variables of interest: total volume and basal area at stand level. The results underline that the availability of multimodal remote sensing data is not a direct synonym of improved performance; the way in which the data are combined is of paramount importance.
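
The abstract describes a two-branch late-fusion network: a 2-D CNN over Sentinel-2 time series and a multilayer perceptron over LiDAR-derived metrics, whose embeddings are fused before a regression head predicting stand-level total volume and basal area. The PyTorch sketch below illustrates that layout under stated assumptions only; the Sentinel-2 channel count (bands stacked across dates), the number of LiDAR metrics, all layer widths, and the joint two-output head are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of a two-branch late-fusion regressor (assumed hyperparameters).
import torch
import torch.nn as nn

class TwoBranchLateFusion(nn.Module):
    def __init__(self, s2_channels=40, n_lidar_metrics=30, embed_dim=64):
        super().__init__()
        # Branch 1: 2-D CNN over Sentinel-2 time-series patches
        # (dates x bands stacked along the channel axis -- an assumption).
        self.cnn = nn.Sequential(
            nn.Conv2d(s2_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to one vector per stand patch
            nn.Flatten(),
            nn.Linear(64, embed_dim),
            nn.ReLU(),
        )
        # Branch 2: multilayer perceptron over stand-level LiDAR-derived metrics.
        self.mlp = nn.Sequential(
            nn.Linear(n_lidar_metrics, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
            nn.ReLU(),
        )
        # Late fusion: concatenate the two embeddings, then regress the
        # forest variables (here total volume and basal area jointly).
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, s2_patch, lidar_metrics):
        fused = torch.cat([self.cnn(s2_patch), self.mlp(lidar_metrics)], dim=1)
        return self.head(fused)

# Example usage with dummy tensors: a batch of 8 stands, 32x32 Sentinel-2
# patches and matching LiDAR metric vectors.
model = TwoBranchLateFusion()
preds = model(torch.randn(8, 40, 32, 32), torch.randn(8, 30))
print(preds.shape)  # torch.Size([8, 2])
```

The late-fusion design keeps each modality in the representation best suited to it (image patches for optical data, tabular metrics for LiDAR) and only merges the learned embeddings, which is the property the abstract emphasizes over the mere availability of both sources.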
