Proceedings Paper

EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras

Publisher

IEEE
DOI: 10.1109/IROS40897.2019.8968520

Funding

  1. Northrop Grumman Mission Systems University Research Program
  2. Office of Naval Research (ONR) [N00014-17-1-2622]
  3. National Science Foundation [1824198]
  4. NSF Division of Behavioral and Cognitive Sciences
  5. NSF Directorate for Social, Behavioral and Economic Sciences [1824198]

Abstract

We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset, EV-IMO, which includes accurate pixel-wise motion masks, egomotion, and ground-truth depth. Our approach is based on an efficient implementation of the SfM (structure from motion) learning pipeline using a low-parameter neural network architecture on event data. In addition to camera egomotion and a dense depth map, the network estimates independently moving object segmentation at the pixel level and computes per-object 3D translational velocities of moving objects. We also train a shallow network with just 40k parameters, which is able to compute depth and egomotion. Our EV-IMO dataset features 32 minutes of indoor recording with up to 3 fast-moving objects in the camera field of view. The objects and the camera are tracked using a VICON® motion capture system. By 3D scanning the room and the objects, we obtain ground-truth depth maps and pixel-wise object masks. We then train and evaluate our learning pipeline on EV-IMO and demonstrate that it is well suited for scene-constrained robotics applications.
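
The abstract describes a multi-task network that jointly predicts dense depth, per-pixel motion masks, and camera egomotion from event data. The sketch below is a minimal PyTorch illustration of that overall structure, not the authors' implementation: the EvIMONet name, the layer sizes, the 4-channel event-frame input, and the 3-object cap are all assumptions chosen only to make the example self-contained and runnable.

    import torch
    import torch.nn as nn

    class EvIMONet(nn.Module):
        """Illustrative multi-task network in the spirit of the paper's
        pipeline: from a stack of event frames, predict a dense depth map,
        per-pixel motion-segmentation logits, and 6-DoF camera egomotion.
        Sizes are illustrative; total parameters land in the tens of
        thousands, in the same small-model spirit as the paper's
        40k-parameter shallow network."""

        def __init__(self, in_ch=4, n_objects=3):
            super().__init__()
            # Shared encoder over the event-frame tensor.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Dense depth head (upsamples back to input resolution).
            self.depth_head = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
            )
            # Pixel-wise mask head: one logit per object plus background.
            self.mask_head = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, n_objects + 1, 4, stride=2, padding=1),
            )
            # Egomotion head: 3 translation + 3 rotation parameters from
            # globally pooled features. (Per-object velocities, which the
            # paper also estimates, are omitted here for brevity.)
            self.pose_head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 6)
            )

        def forward(self, x):
            f = self.encoder(x)
            return self.depth_head(f), self.mask_head(f), self.pose_head(f)

    if __name__ == "__main__":
        net = EvIMONet()
        events = torch.randn(1, 4, 64, 64)  # dummy event-frame stack
        depth, masks, pose = net(events)
        print(depth.shape, masks.shape, pose.shape)

Run on a dummy 64x64 event stack, the three heads return a (1, 1, 64, 64) depth map, a (1, 4, 64, 64) mask-logit tensor (three objects plus background), and a (1, 6) egomotion vector.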
