Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 43, Issue 4, Pages 1293-1307
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2019.2952114
Keywords
Three-dimensional displays; Visualization; Cameras; Pose estimation; Buildings; Databases; Feature extraction; Visual localization; place recognition; pose estimation; image retrieval; feature matching; view synthesis
Funding
- JSPS KAKENHI [15H05313, 17H00744, 17J05908]
- EU [731970]
- ERC [336845]
- CIFAR Learning in Machines Brains program
- EU Structural and Investment Funds, Operational Programme Research, Development and Education under the project IMPACT [CZ.02.1.01/0.0/0.0/15_003/0000468]
- NVIDIA Corporation
- Grants-in-Aid for Scientific Research [17J05908, 15H05313] Funding Source: KAKEN
This research aims to predict the pose of indoor photographs in a large 3D map. Contributions include a new visual localization method, a dataset for indoor localization, and significant performance improvements on challenging new data.
We seek to predict the 6 degree-of-freedom (6DoF) pose of a query photograph with respect to a large indoor 3D map. The contributions of this work are three-fold. First, we develop a new large-scale visual localization method targeted for indoor spaces. The method proceeds along three steps: (i) efficient retrieval of candidate poses that scales to large-scale environments, (ii) pose estimation using dense matching rather than sparse local features to deal with weakly textured indoor scenes, and (iii) pose verification by virtual view synthesis that is robust to significant changes in viewpoint, scene layout, and occlusion. Second, we release a new dataset with reference 6DoF poses for large-scale indoor localization. Query photographs are captured by mobile phones at a different time than the reference 3D map, thus presenting a realistic indoor localization scenario. Third, we demonstrate that our method significantly outperforms current state-of-the-art indoor localization approaches on this new challenging data. Code and data are publicly available.
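The three-step pipeline in the abstract — (i) candidate retrieval, (ii) pose scoring via dense matching, (iii) verification against a synthesized view — can be sketched at a high level as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the cosine-similarity retrieval, and the simple dense-agreement and pixel-wise verification scores are all assumptions standing in for the learned descriptors, dense feature matching with PnP, and rendered-view verification described in the text.

```python
import numpy as np

def retrieve_candidates(query_desc, db_descs, k=3):
    # Step (i): rank database images by global-descriptor similarity
    # (cosine here, as a stand-in for a learned image-retrieval descriptor)
    # and keep the top-k candidate poses.
    sims = db_descs @ query_desc / (
        np.linalg.norm(db_descs, axis=1) * np.linalg.norm(query_desc) + 1e-8)
    return np.argsort(-sims)[:k]

def dense_match_score(query_feat, db_feat):
    # Step (ii): toy stand-in for dense-matching pose estimation.
    # Scores a candidate by agreement of dense feature maps; the paper
    # instead matches dense features and solves for a 6DoF pose.
    return float(-np.abs(query_feat - db_feat).mean())

def verify_by_synthesis(query_img, rendered_img):
    # Step (iii): pose verification by view synthesis. Compare the query
    # image against a view rendered from the candidate pose; here a
    # simple pixel-wise similarity plays that role.
    return float(1.0 / (1.0 + np.abs(query_img - rendered_img).mean()))

def localize(query_desc, query_feat, query_img, db):
    # Combine the three steps: retrieve, score each candidate pose,
    # verify, and return the best-scoring database entry.
    best, best_score = None, -np.inf
    for idx in retrieve_candidates(query_desc, db["descs"]):
        score = (dense_match_score(query_feat, db["feats"][idx])
                 + verify_by_synthesis(query_img, db["renders"][idx]))
        if score > best_score:
            best, best_score = idx, score
    return best
```

The coarse-to-fine structure is the point: cheap global retrieval prunes the large indoor map, dense matching replaces sparse local features that fail on weakly textured walls, and verification against a synthesized view rejects candidates that only superficially resemble the query.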