3.8 Proceedings Paper

BoxNet: A Deep Learning Method for 2D Bounding Box Estimation from Bird's-Eye View Point Cloud

Journal

Publisher

IEEE
DOI: 10.1109/ivs.2019.8814058

Keywords

-


We present a learning-based method to estimate an object's bounding box from its 2D bird's-eye view (BEV) LiDAR points. Our method, entitled BoxNet, exploits a simple deep neural network that can efficiently handle unordered points. The method takes as input the 2D coordinates of all the points, and the output is a vector consisting of both the box pose (position and orientation in the LiDAR coordinate system) and its size (width and length). To deal with the angle discontinuity problem, we propose to estimate the double-angle sinusoidal values rather than the angle itself. We also predict the center relative to the point cloud mean to boost the performance of estimating the location of the box. The proposed method does not rely on the ordering of points, as many existing approaches do, and can accurately predict the actual size of the bounding box based on prior information obtained from the training data. BoxNet is validated on the KITTI 3D object dataset, with significant improvement over state-of-the-art non-learning-based methods.
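The two output encodings described in the abstract (double-angle sinusoidal heading values and a box center expressed relative to the point-cloud mean) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the function names, variable names, and the decoding convention are assumptions introduced here for clarity.

import numpy as np

def encode_box_targets(points_bev, box_center, box_yaw):
    # Illustrative target encoding for a BEV box regressor (hypothetical helper).
    # points_bev: (N, 2) array of 2D LiDAR points belonging to the object.
    # box_center: (2,) box center in the LiDAR frame; box_yaw: heading in radians.
    mean = points_bev.mean(axis=0)                 # mean of the BEV points
    center_offset = box_center - mean              # center predicted relative to the mean
    angle_enc = np.array([np.sin(2.0 * box_yaw),   # double-angle values avoid the
                          np.cos(2.0 * box_yaw)])  # +/- pi discontinuity of raw angles
    return center_offset, angle_enc

def decode_box_targets(points_bev, center_offset, angle_enc):
    # Invert the encoding: recover the center and a heading in (-pi/2, pi/2].
    mean = points_bev.mean(axis=0)
    center = mean + center_offset
    yaw = 0.5 * np.arctan2(angle_enc[0], angle_enc[1])
    return center, yaw

Because the double-angle representation maps headings that differ by 180 degrees to the same target, a regressor using it is not penalized for the front/back ambiguity of a box, which is one common motivation for this kind of encoding.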

Authors

