Article

Online Adaptation for Implicit Object Tracking and Shape Reconstruction in the Wild

Journal

IEEE Robotics and Automation Letters
Volume 7, Issue 4, Pages 8909-8916

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2022.3189185

Keywords

Deep learning for visual perception; visual tracking


This paper proposes a novel framework that uses video data to simultaneously track and reconstruct 3D objects with a neural implicit function. By iteratively improving shape reconstruction and tracking in tandem, the method achieves significant improvements on two datasets.
Tracking and reconstructing 3D objects from cluttered scenes are key components of computer vision, robotics, and autonomous driving systems. While recent progress on implicit functions has shown encouraging results for high-quality 3D shape reconstruction, generalizing to cluttered and partially observable LiDAR data remains very challenging. In this paper, we propose to leverage the temporal continuity of video data. We introduce a novel, unified framework that uses a neural implicit function to simultaneously track and reconstruct 3D objects in the wild. Our approach adapts the DeepSDF model (an instantiation of the implicit function) online over the video, iteratively improving the shape reconstruction, which in turn improves the tracking, and vice versa. We experiment on both the Waymo and KITTI datasets and show significant improvements over state-of-the-art methods on both the tracking and shape reconstruction tasks. Our project page is at https://jianglongye.com/implicit-tracking.
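The alternating shape/pose adaptation the abstract describes can be sketched in miniature. The following is an illustrative sketch, not the authors' implementation: a sphere SDF with a single learnable scalar (its radius) stands in for the DeepSDF latent code, and the object pose is reduced to a 3D translation. Each iteration takes one gradient step on the shape with the pose fixed, then one on the pose with the shape fixed, driving the signed distance at observed surface points toward zero.

```python
import numpy as np

def sdf(points, radius, translation):
    """Signed distance of points to a sphere of `radius` centered at `translation`."""
    return np.linalg.norm(points - translation, axis=1) - radius

def adapt(points, radius, translation, iters=300, lr=0.05):
    """Alternately update shape (radius) and pose (translation) so that
    observed surface points satisfy SDF ~= 0, minimizing 0.5*mean(d^2)."""
    translation = translation.astype(float).copy()
    for _ in range(iters):
        # Shape step (pose fixed): d(loss)/d(radius) = -mean(d),
        # so gradient descent nudges the radius by +lr*mean(d).
        d = sdf(points, radius, translation)
        radius += lr * np.mean(d)

        # Pose step (shape fixed): d(d_i)/d(translation) = -n_i, where n_i is
        # the unit direction from the center to point i, so the descent step
        # is +lr*mean(d_i * n_i).
        d = sdf(points, radius, translation)
        diff = points - translation
        n = diff / np.linalg.norm(diff, axis=1, keepdims=True)
        translation += lr * np.mean(d[:, None] * n, axis=0)
    return radius, translation
```

In the actual method the shape step would back-propagate through a DeepSDF decoder to a latent vector, and the pose step would optimize a full SE(3) transform over partial LiDAR points, but the alternating structure is the same.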

