Article

An Accurate, Robust Visual Odometry and Detail-Preserving Reconstruction System

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 23, Pages 2820-2832

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2020.3017886

Keywords

Image reconstruction; Estimation; Cameras; Tracking; Visual odometry; Simultaneous localization and mapping; Brightness; Deep convolutional neural networks; monocular vision simultaneous localization and mapping; visual odometry

Funding

  1. National Natural Science Foundation of China [61772267]
  2. Fundamental Research Funds for the Central Universities [NE2014402, NE2016004]
  3. Nanjing University of Aeronautics and Astronautics Fundamental Research Funds [NS2015053]

Abstract

The paper introduces a novel approach to accurate and robust ego-motion estimation and reconstruction in indoor environments, achieved by creating event-based difference images, inferring camera ego-motion with a deep convolutional neural network, and reducing mismatches in the depth estimation stage with an event region search algorithm.
Tracking and mapping in monocular SLAM systems remain active research problems because of their challenging nature. In this paper, we propose a novel approach that performs accurate and robust ego-motion estimation and provides detail-preserving reconstruction in indoor environments. More specifically, we design a new algorithm, called synchronous event measurement (SEM), to create event-based difference images (EDIs) that highlight frame-to-frame (F2F) differences. Our observation is that the F2F difference is highly correlated with changes in the camera's motion. We therefore feed EDIs into a deep convolutional neural network to infer the camera's ego-motion. Subsequently, building on a monocular reconstruction framework (REMODE), we devise an algorithm named event region search (ERS) to reduce the possibility of mismatches in the depth estimation stage. Evaluations on a variety of datasets demonstrate the satisfactory performance of our proposed method: the ego-motion estimation is more accurate than that of several geometry-based visual odometry (VO) and learning-based approaches, and the results remain robust under extreme conditions such as brightness variation and motion blur. Meanwhile, our approach provides more precise depth maps with relatively rich textural information.
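The abstract does not specify how the SEM algorithm computes its event-based difference images, so as a rough illustration only, the sketch below approximates the underlying idea with a plain thresholded absolute difference between consecutive grayscale frames; the function name, threshold value, and synthetic data are assumptions for this example, not the paper's actual method.

```python
import numpy as np

def frame_difference_image(prev_frame, curr_frame, threshold=15):
    """Per-pixel absolute difference between consecutive grayscale frames,
    thresholded to keep only pixels with significant intensity change
    (a crude stand-in for the 'events' an event-based sensor would report)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # binary change mask

# Synthetic example: a bright square shifts two pixels to the right
# between frames, mimicking apparent motion under camera movement.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[20:40, 20:40] = 200
curr[20:40, 22:42] = 200

edi = frame_difference_image(prev, curr)
# The changed pixels concentrate at the square's leading and trailing
# edges -- the kind of structure the abstract says correlates with
# frame-to-frame camera motion.
print(edi.sum())  # prints 80 (two 20x2 edge strips)
```

In this toy setting, the difference image responds only where the scene moved, which is why feeding such images to a CNN (as the paper does with EDIs) can plausibly expose motion cues while suppressing static appearance.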

