Article

Seeing Through Darkness: Visual Localization at Night via Weakly Supervised Learning of Domain Invariant Features

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 25, Issue -, Pages 1713-1726

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2022.3154165

Keywords

Domain invariant local features; image matching; long-term visual localization; weakly supervised learning

Abstract

This paper proposes an adversarial-learning-based solution for extracting robust local features and descriptors across day-night images. A discriminator is trained to distinguish day images from night images while the feature extraction network is adjusted to fool it, so the network learns to extract domain-invariant keypoints and descriptors. Compared with existing methods, this approach only requires an additional set of easily captured night images to improve the domain invariance of the learned features.
Long-term visual localization has to conquer the problem of matching images under dramatic photometric changes caused by different seasons, natural and man-made illumination changes, etc. Visual localization at night plays a vital role in many applications such as autonomous driving and augmented reality, for which extracting keypoints and descriptors that are robust to day-night illumination changes has become the bottleneck. This paper proposes an adversarial-learning-based solution that harvests the weak domain labels of day and night images, along with point-level correspondences among daytime images, to achieve robust local feature extraction and description across day-night images. The key idea is to learn a discriminator that distinguishes whether a feature map is generated from a day or a night image, and simultaneously to adjust the parameters of the feature extraction network so as to fool the discriminator. After adversarial training of the discriminator and the feature extraction network, the feature extraction network reaches a stable state in which the extracted feature maps are robust to day-night photometric changes, and day-night domain-invariant keypoints and descriptors can be extracted from them. Compared to existing local feature learning methods, the approach only requires an additional set of easily captured night images to improve the domain invariance of the learned features. Experiments on two challenging benchmarks show the effectiveness of the proposed method. In addition, the paper revisits the widely used image matching metrics on HPatches and finds that the recall of different methods is highly correlated with their relative localization performance.
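
The adversarial scheme described in the abstract alternates between updating a day/night discriminator on feature maps and updating the feature extractor to fool it. The following is a minimal PyTorch-style sketch of that idea only; the module names (FeatureNet, DayNightDiscriminator), architectures, and hyperparameters are illustrative assumptions, and the paper's keypoint/descriptor heads and point-level correspondence loss on daytime pairs are not reproduced here.

```python
# Hedged sketch of adversarial training for day-night invariant feature maps.
# All names and architectures below are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):
    """Toy fully-convolutional backbone producing dense feature maps."""
    def __init__(self, dim=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

class DayNightDiscriminator(nn.Module):
    """Predicts whether a feature map came from a day (0) or night (1) image."""
    def __init__(self, dim=128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(dim, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, f):
        return self.head(f)

feat_net = FeatureNet()
disc = DayNightDiscriminator()
opt_f = torch.optim.Adam(feat_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

def adversarial_step(day_imgs, night_imgs):
    # 1) Discriminator update: learn to tell day features from night features.
    with torch.no_grad():
        f_day, f_night = feat_net(day_imgs), feat_net(night_imgs)
    d_loss = F.binary_cross_entropy_with_logits(
        disc(f_day), torch.zeros(day_imgs.size(0), 1)
    ) + F.binary_cross_entropy_with_logits(
        disc(f_night), torch.ones(night_imgs.size(0), 1)
    )
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Feature-network update: fool the discriminator so night features look
    #    like day features (only image-level, i.e. weak, domain supervision).
    #    The paper's point-level matching loss on day-day pairs would be added here.
    g_loss = F.binary_cross_entropy_with_logits(
        disc(feat_net(night_imgs)), torch.zeros(night_imgs.size(0), 1)
    )
    opt_f.zero_grad(); g_loss.backward(); opt_f.step()
    # Gradients that leaked into disc here are cleared by opt_d.zero_grad()
    # at the start of the next call.
    return d_loss.item(), g_loss.item()
```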
