Article

Vision-Only Robot Navigation in a Neural Radiance World

Journal

IEEE Robotics and Automation Letters
Volume 7, Issue 2, Pages 4606-4613

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/LRA.2022.3150497

Keywords

Collision avoidance; localization; motion and path planning; vision-based navigation; neural radiance fields

Funding

  1. NASA Space Technology Research Fellowship [NSSC18K1180]
  2. NSF National Robotics Initiative (NRI) [1830402]
  3. ONR [N00014-18-1-2830]
  4. Siemens
  5. Stanford Data Science Initiative
  6. NSF Division of Information & Intelligent Systems, Directorate for Computer & Information Science & Engineering [1830402]

Abstract

The article represents the 3D scene as a pre-trained NeRF and proposes an algorithm for navigating a robot through it using only an onboard RGB camera, combining a trajectory optimization method with a pose estimation filter. Running the trajectory planner and the pose filter together in an online replanning loop yields a vision-based robot navigation pipeline.
Neural Radiance Fields (NeRFs) have recently emerged as a powerful paradigm for the representation of natural, complex 3D scenes. NeRFs represent continuous volumetric density and RGB values in a neural network, and generate photo-realistic images from unseen camera viewpoints through ray tracing. We propose an algorithm for navigating a robot through a 3D environment represented as a NeRF using only an onboard RGB camera for localization. We assume the NeRF for the scene has been pre-trained offline, and that the robot's objective is to navigate through unoccupied space in the NeRF to reach a goal pose. We introduce a trajectory optimization algorithm that avoids collisions with high-density regions in the NeRF, based on a discrete-time version of differential flatness that is amenable to constraining the robot's full pose and control inputs. We also introduce an optimization-based filtering method to estimate the robot's 6DoF pose and velocities in the NeRF given only an onboard RGB camera. We combine the trajectory planner with the pose filter in an online replanning loop to give a vision-based robot navigation pipeline. We present simulation results with a quadrotor robot navigating through a jungle gym environment, the inside of a church, and Stonehenge using only an RGB camera. We also demonstrate an omnidirectional ground robot navigating through the church, requiring it to reorient to fit through a narrow gap.
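
To make the two components concrete, below are two minimal sketches, not the authors' implementation. Every name in them (nerf_density, render, the Gaussian obstacle, the finite-difference optimizer) is a hypothetical stand-in: the paper differentiates through a trained NeRF, plans over the robot's full pose via discrete-time differential flatness, and localizes against images actually rendered from the NeRF. The first sketch captures the planning idea of treating the NeRF's density output as a collision penalty along a discrete-time trajectory:

    import numpy as np

    # Hypothetical stand-in for the pre-trained NeRF's density head: the
    # real network maps a 3D point to a non-negative volumetric density.
    # Here a Gaussian blob at the origin plays the role of an obstacle.
    def nerf_density(points):
        return 10.0 * np.exp(-4.0 * np.sum(points ** 2, axis=-1))

    def trajectory_cost(waypoints, start, goal, w_col=1.0, w_smooth=1.0):
        # Collision term: NeRF density sampled at the trajectory points.
        # Smoothness term: squared lengths of consecutive segments.
        pts = np.vstack([start, waypoints, goal])
        collision = np.sum(nerf_density(pts))
        smoothness = np.sum(np.diff(pts, axis=0) ** 2)
        return w_col * collision + w_smooth * smoothness

    def optimize_trajectory(start, goal, n=16, iters=300, lr=0.005, eps=1e-5):
        # Toy finite-difference gradient descent on the free waypoints; the
        # paper instead backpropagates through the trained NeRF. A small
        # jitter breaks the symmetry of a line aimed straight at the blob.
        rng = np.random.default_rng(0)
        wp = np.linspace(start, goal, n + 2)[1:-1]
        wp = wp + rng.normal(scale=1e-2, size=wp.shape)
        for _ in range(iters):
            base = trajectory_cost(wp, start, goal)
            grad = np.zeros_like(wp)
            for i in range(wp.shape[0]):
                for j in range(wp.shape[1]):
                    pert = wp.copy()
                    pert[i, j] += eps
                    grad[i, j] = (trajectory_cost(pert, start, goal) - base) / eps
            wp -= lr * grad
        return np.vstack([start, wp, goal])

    # A straight line through the dense blob gets bent around it.
    plan = optimize_trajectory(np.array([-1.5, 0.0, 0.0]),
                               np.array([1.5, 0.0, 0.0]))
    print("max density along plan:", nerf_density(plan).max())

The second sketch mirrors the filtering idea: estimate the pose that best explains the current camera image by minimizing photometric error between a rendered image and the observation, regularized toward the previous estimate. Here render is again a hypothetical stand-in, and the toy optimizes position only, whereas the paper's filter renders from the NeRF and estimates full 6DoF pose and velocities:

    import numpy as np

    # Hypothetical stand-in renderer: a tiny synthetic "image" whose pixels
    # vary smoothly with camera position, in place of NeRF ray tracing.
    def render(pos):
        u = np.linspace(0.0, 1.0, 16)
        return (np.sin(6.0 * u[None, :] + pos[0])
                + np.cos(6.0 * u[:, None] + pos[1])
                + pos[2] * u[:, None] * u[None, :])

    def pose_filter_step(prev_est, observed, iters=300, lr=0.5, eps=1e-4):
        # One optimization-based filter update: photometric loss plus a
        # weak prior pulling the estimate toward the previous one.
        def loss(p):
            photometric = np.mean((render(p) - observed) ** 2)
            prior = 0.01 * np.sum((p - prev_est) ** 2)
            return photometric + prior
        est = prev_est.copy()
        for _ in range(iters):
            base = loss(est)
            grad = np.zeros_like(est)
            for j in range(est.size):
                pert = est.copy()
                pert[j] += eps
                grad[j] = (loss(pert) - base) / eps
            est -= lr * grad
        return est

    rng = np.random.default_rng(1)
    true_pos = np.array([0.3, -0.2, 0.5])
    observed = render(true_pos) + rng.normal(scale=0.01, size=(16, 16))
    print("estimate:", pose_filter_step(np.zeros(3), observed))  # near true_pos

Closing the loop then amounts to alternating the two pieces, as the abstract describes: filter the pose from the latest image, replan from the filtered estimate, execute the first step of the plan, and repeat until the goal pose is reached.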

