Article

DeepFuseNet of Omnidirectional Far-Infrared and Visual Stream for Vegetation Detection

Journal

IEEE Transactions on Geoscience and Remote Sensing
Volume 59, Issue 11, Pages 9057-9070

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TGRS.2020.3044487

Keywords

Visualization; Feature extraction; Robots; Sensors; Vegetation mapping; Cameras; Sensor fusion; Convolutional neural network (CNN); deep learning (DL); object recognition; omnidirectional (O-D) far-infrared (FIR) and visual fusion; semantic extraction; vegetation detection

Funding

  1. U.S. Navy
  2. Office of Naval Research
  3. Naval Surface Warfare Center Dahlgren


This study applies deep learning to the fusion of omnidirectional far-infrared and visual sensors to enhance the intelligent perception of autonomous robotic systems. The proposed fusion networks improve the extraction of vegetation material and sharply reduce false detection rates relative to indices-based methods, as demonstrated experimentally.
This article investigates the application of deep learning (DL) to the fusion of omnidirectional (O-D) infrared (IR) sensors and O-D visual sensors to improve the intelligent perception of autonomous robotic systems. Recent techniques primarily focus on O-D and conventional visual sensors for applications in localization, mapping, and tracking; robotic vision systems have not yet sufficiently exploited the combination of O-D IR and O-D visual sensors, coupled with DL, for the extraction of vegetation material. We contrast current approaches with our deep vegetation-learning sensor fusion. This article introduces two architectures: 1) two autoencoders feeding into a four-layer convolutional neural network (CNN) and 2) two deep CNN feature extractors feeding a deep CNN fusion network (DeepFuseNet), each fusing the O-D IR and O-D visual sensors to reduce the number of false detections inherent in indices-based spectral decomposition. We compare our DL results to our previous work with normalized difference vegetation index (NDVI) and IR region-based spectral fusion, and to traditional machine learning approaches. This work demonstrates that fusing the O-D IR and O-D visual streams with our DeepFuseNet DL approach outperforms both the previous NDVI fused with far-IR region segmentation and traditional machine learning approaches. Experimental results validate a 92% reduction in false detections compared to traditional indices-based detection. This article contributes a novel method for the fusion of O-D visual and O-D IR sensors using two CNN feature extractors feeding into a deep CNN (DeepFuseNet).
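
The abstract only outlines the two architectures, so the following is a minimal PyTorch sketch, not the authors' implementation: two CNN feature extractors, one per omnidirectional stream (far-infrared and visual), feed concatenated feature maps into a deep CNN fusion head that predicts a per-pixel vegetation map. Layer depths, channel counts, input resolution, and the assumption of co-registered frames are all illustrative. For context, the NDVI baseline mentioned above is the standard index (NIR - Red) / (NIR + Red).

# Illustrative sketch only (not the published code): two stream encoders
# whose feature maps are concatenated and passed through a deep CNN fusion
# head that outputs a vegetation logit map. All sizes are assumptions.
import torch
import torch.nn as nn


class StreamEncoder(nn.Module):
    """CNN feature extractor for a single sensor stream."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)


class DeepFuseNetSketch(nn.Module):
    """Fuses far-infrared and visual feature maps with a deep CNN head."""

    def __init__(self):
        super().__init__()
        self.ir_encoder = StreamEncoder(in_channels=1)    # single-band FIR stream
        self.rgb_encoder = StreamEncoder(in_channels=3)   # visual (RGB) stream
        self.fusion = nn.Sequential(                       # 4-layer fusion CNN
            nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),               # per-pixel vegetation logit
        )

    def forward(self, ir: torch.Tensor, rgb: torch.Tensor) -> torch.Tensor:
        # Concatenate the two streams' feature maps along the channel axis.
        fused = torch.cat([self.ir_encoder(ir), self.rgb_encoder(rgb)], dim=1)
        return self.fusion(fused)


# Example: one 256x256 O-D FIR frame with its co-registered visual frame.
model = DeepFuseNetSketch()
ir = torch.randn(1, 1, 256, 256)
rgb = torch.randn(1, 3, 256, 256)
vegetation_logits = model(ir, rgb)   # shape: (1, 1, 256, 256)

In the paper's setting, such a learned fusion is trained to discriminate vegetation directly, rather than thresholding a fixed index such as NDVI, which is what allows it to reduce false detections.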
