Article

Learning Selective Sensor Fusion for State Estimation

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TNNLS.2022.3176677

Keywords

Feature extraction; Sensor fusion; Robot sensing systems; Visualization; Location awareness; Laser radar; Task analysis; Deep neural networks (DNNs); feature selection; localization; multimodal learning; point cloud odometry; robot navigation; sensor fusion; visual-inertial odometry (VIO)

Funding

  1. Engineering and Physical Sciences Research Council (EPSRC) [EP/S030832/1]
  2. NSFC [62103427, 62073331]

Abstract
Autonomous vehicles and mobile robotic systems are typically equipped with multiple sensors to provide redundancy. By integrating the observations from different sensors, these mobile agents are able to perceive the environment and estimate system states, e.g., locations and orientations. Although deep learning (DL) approaches for multimodal odometry estimation and localization have gained traction, they rarely focus on the issue of robust sensor fusion, a necessary consideration to deal with noisy or incomplete sensor observations in the real world. Moreover, current deep odometry models suffer from a lack of interpretability. To this end, we propose SelectFusion, an end-to-end selective sensor fusion module that can be applied to useful pairs of sensor modalities, such as monocular images and inertial measurements, depth images, and light detection and ranging (LIDAR) point clouds. Our model is a uniform framework that is not restricted to a specific modality or task. During prediction, the network is able to assess the reliability of the latent features from different sensor modalities and to estimate trajectory, recovering both scale and global pose. In particular, we propose two fusion modules, a deterministic soft fusion and a stochastic hard fusion, and offer a comprehensive study of the new strategies compared with trivial direct fusion. We extensively evaluate all fusion strategies both on public datasets and on progressively degraded datasets that present synthetic occlusions, noisy and missing data, and time misalignment between sensors, and we investigate the effectiveness of the different fusion strategies in attending to the most reliable features, which in itself provides insights into the operation of the various models.
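The two fusion strategies named in the abstract can be illustrated in miniature: soft fusion deterministically reweights each latent feature channel with a continuous mask, while hard fusion samples a discrete keep/drop decision per channel via the Gumbel-softmax reparameterization so that the sampling step remains differentiable. The sketch below is a pure-Python illustration under stated assumptions, not the paper's implementation; the actual SelectFusion masks are produced by learned networks conditioned on both modalities, and the function names and the two-logit keep/drop parameterization here are illustrative.

```python
import math
import random

def soft_mask(scores):
    """Deterministic soft fusion (sketch): squash each learned relevance
    score through a sigmoid, giving a continuous weight in (0, 1) that
    rescales the corresponding feature channel."""
    return [1.0 / (1.0 + math.exp(-s)) for s in scores]

def gumbel_softmax(logits, temperature=1.0):
    """Stochastic hard fusion (sketch): perturb the [keep, drop] logits of
    one channel with Gumbel noise and apply a temperature-scaled softmax.
    Low temperatures push the result toward a one-hot keep/drop decision
    while keeping the operation differentiable in the logits."""
    gumbels = [-math.log(-math.log(random.random())) for _ in logits]
    z = [(l + g) / temperature for l, g in zip(logits, gumbels)]
    m = max(z)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(visual_feat, inertial_feat, visual_mask, inertial_mask):
    """Apply per-channel masks to each modality's latent features, then
    concatenate the masked vectors into a single fused representation."""
    masked_v = [f * w for f, w in zip(visual_feat, visual_mask)]
    masked_i = [f * w for f, w in zip(inertial_feat, inertial_mask)]
    return masked_v + masked_i
```

Direct fusion corresponds to calling `fuse` with all-ones masks; the selective variants differ only in how the masks are produced, which is what makes the learned masks themselves interpretable as per-channel reliability estimates.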

