Article

Learned Camera Gain and Exposure Control for Improved Visual Feature Detection and Matching

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 2, Pages 2028-2035

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2021.3058909

Keywords

Deep learning for visual perception; vision-based navigation; visual learning

Funding

  1. Canada Research Chairs program

Abstract

Successful visual navigation depends upon capturing images that contain sufficient useful information. In this letter, we explore a data-driven approach to account for environmental lighting changes, improving the quality of images for use in visual odometry (VO) or visual simultaneous localization and mapping (SLAM). We train a deep convolutional neural network model to predictively adjust camera gain and exposure time parameters such that consecutive images contain a maximal number of matchable features. The training process is fully self-supervised: our training signal is derived from an underlying VO or SLAM pipeline and, as a result, the model is optimized to perform well with that specific pipeline. We demonstrate through extensive real-world experiments that our network can anticipate and compensate for dramatic lighting changes (e.g., transitions into and out of road tunnels), maintaining a substantially higher number of inlier feature matches than competing camera parameter control algorithms.
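
As a concrete illustration of the approach described in the abstract, the following is a minimal sketch (in PyTorch) of a predictive gain-and-exposure controller: a small convolutional network maps the current frame to multiplicative scale factors for camera gain and exposure time. This is not the authors' architecture; the layer sizes, the bounded log-scale output parameterization, and the names (e.g., GainExposureNet) are illustrative assumptions. In the self-supervised scheme the abstract outlines, such a network would be trained so that the parameters it selects maximize the number of matchable (inlier) features reported by the underlying VO or SLAM pipeline; that training signal is not reproduced here.

import torch
import torch.nn as nn


class GainExposureNet(nn.Module):
    """Tiny CNN mapping a grayscale frame to (gain, exposure) scale factors.

    Hypothetical sketch only: layer sizes and output parameterization are
    illustrative assumptions, not the architecture from the paper.
    """

    def __init__(self):
        super().__init__()
        # Shallow convolutional encoder over a single-channel image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two outputs: log-scale adjustments for gain and exposure time.
        self.head = nn.Linear(64, 2)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        features = self.encoder(image).flatten(1)
        # tanh bounds the predicted log-adjustments, so each scale factor stays
        # roughly within [exp(-1), exp(+1)] and the camera settings change smoothly.
        log_adjust = torch.tanh(self.head(features))
        return torch.exp(log_adjust)  # shape (batch, 2): (gain_scale, exposure_scale)


if __name__ == "__main__":
    net = GainExposureNet()
    frame = torch.rand(1, 1, 240, 320)  # placeholder grayscale frame in [0, 1]
    gain_scale, exposure_scale = net(frame)[0]
    print(f"apply gain x{gain_scale.item():.2f}, exposure x{exposure_scale.item():.2f}")

In use, the predicted scale factors would multiply the camera's current gain and exposure-time settings before the next frame is captured, so the controller acts predictively rather than reacting to an already under- or over-exposed image.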
