Proceedings Paper

Monocular Visual-Inertial State Estimation for Mobile Augmented Reality

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/ISMAR.2017.18

Keywords

I.4.8 [Image Processing and Computer Vision]: Scene Analysis-Tracking; H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems-Artificial, augmented, and virtual realities

Funding

  1. Hong Kong Research Grants Council, Early Career Scheme [26201616]

Abstract

Mobile phones equipped with a monocular camera and an inertial measurement unit (IMU) are ideal platforms for augmented reality (AR) applications, but the lack of direct metric distance measurement and the presence of aggressive motions pose significant challenges to the localization of the AR device. In this work, we propose a tightly-coupled, optimization-based, monocular visual-inertial state estimator for robust camera localization in complex indoor and outdoor environments. Our approach does not require any artificial markers, and is able to recover the metric scale using the monocular camera setup. The whole system is capable of online initialization without relying on any assumptions about the environment. Our tightly-coupled formulation makes it naturally robust to aggressive motions. We develop a lightweight loop closure module that is tightly integrated with the state estimator to eliminate drift. The performance of our proposed method is demonstrated via comparison against state-of-the-art visual-inertial state estimators on public datasets and real-time AR applications on mobile devices. We release our implementation on mobile devices as open source software (1).
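For context, tightly-coupled optimization-based estimators of the kind described in the abstract typically minimize a joint nonlinear least-squares objective over a sliding window of states. The following is a standard formulation from the visual-inertial literature, given here as an illustrative sketch rather than the authors' exact notation:

```latex
\min_{\mathcal{X}}
\underbrace{\left\| \mathbf{r}_p - \mathbf{H}_p \mathcal{X} \right\|^2}_{\text{marginalization prior}}
+ \sum_{k \in \mathcal{B}}
\underbrace{\left\| \mathbf{r}_{\mathcal{B}}\!\left(\hat{\mathbf{z}}_{b_k b_{k+1}}, \mathcal{X}\right) \right\|^2_{\mathbf{P}_{b_k b_{k+1}}}}_{\text{IMU preintegration residuals}}
+ \sum_{(l,j) \in \mathcal{C}}
\underbrace{\left\| \mathbf{r}_{\mathcal{C}}\!\left(\hat{\mathbf{z}}_{l}^{c_j}, \mathcal{X}\right) \right\|^2_{\mathbf{P}_{l}^{c_j}}}_{\text{visual reprojection residuals}}
```

Here $\mathcal{X}$ stacks the sliding-window states (position, velocity, orientation, and IMU biases for each keyframe, plus feature depths), $\mathcal{B}$ indexes consecutive keyframe pairs with preintegrated IMU measurements, and $\mathcal{C}$ indexes feature observations in camera frames. Because IMU and visual terms are optimized jointly rather than fused loosely, the estimator remains constrained by inertial measurements when visual tracking degrades under aggressive motion, which is the robustness property the abstract refers to.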

