Article

A Survey on 3D Object Detection Methods for Autonomous Driving Applications

Journal

IEEE Transactions on Intelligent Transportation Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TITS.2019.2892405

Keywords

Machine learning; deep learning; computer vision; object detection; autonomous vehicles; intelligent vehicles

Funding

  1. Jaguar Land Rover
  2. EPSRC U.K. through the Towards Autonomy: Smart and Connected Control Program [EP/N01300X/1, EP/N01300X/2] (Funding Source: UKRI)

Abstract

An autonomous vehicle (AV) requires an accurate perception of its surrounding environment to operate reliably. The perception system of an AV, which normally employs machine learning (e.g., deep learning), transforms sensory data into semantic information that enables autonomous driving. Object detection is a fundamental function of this perception system and has been tackled by several works, most of them using 2D detection methods. However, 2D methods do not provide the depth information required for driving tasks such as path planning and collision avoidance. Alternatively, 3D object detection methods introduce a third dimension that reveals an object's size and location in more detail. Nonetheless, the detection accuracy of such methods still needs to be improved. To the best of our knowledge, this is the first survey on 3D object detection methods used for autonomous driving applications. This paper presents an overview of 3D object detection methods and of the sensors and datasets prevalently used in AVs. It then discusses and categorizes the recent works by sensor modality into monocular, point cloud-based, and fusion methods. Finally, we summarize the results of the surveyed works and identify the research gaps and future research directions.
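
To make the 2D-versus-3D distinction in the abstract concrete, here is a minimal sketch (Python, not from the paper) of the two box representations. It uses the common 7-parameter 3D box (center, physical size, yaw) found in benchmarks such as KITTI; all class and variable names are illustrative assumptions, not the survey's notation.

```python
# Illustrative sketch: a 2D image-plane box vs. a 7-parameter 3D box.
# The 3D box carries metric depth, physical size, and heading, which is
# exactly the information the abstract notes that 2D detections lack.
from dataclasses import dataclass
import math


@dataclass
class Box2D:
    """Axis-aligned box in image pixels: no depth, size in pixels only."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float


@dataclass
class Box3D:
    """Oriented box in metric 3D space: center, size, and heading."""
    x: float    # center, metres (lateral)
    y: float    # center, metres (vertical)
    z: float    # center, metres (depth -- absent from a 2D box)
    w: float    # width, metres
    l: float    # length, metres
    h: float    # height, metres
    yaw: float  # heading around the vertical axis, radians

    def corners(self):
        """Return the 8 corner coordinates of the box in 3D space."""
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        pts = []
        for dx in (-self.l / 2, self.l / 2):
            for dy in (-self.h / 2, self.h / 2):
                for dz in (-self.w / 2, self.w / 2):
                    # rotate the local offset by yaw, then shift to the center
                    pts.append((self.x + c * dx - s * dz,
                                self.y + dy,
                                self.z + s * dx + c * dz))
        return pts


# Example: a car roughly 12 m ahead. The 3D box directly answers distance
# queries needed for path planning and collision avoidance.
car = Box3D(x=1.5, y=-0.8, z=12.0, w=1.8, l=4.2, h=1.5, yaw=0.1)
print(f"distance to object: {math.hypot(car.x, car.z):.1f} m")
```

Monocular, point cloud-based, and fusion methods differ in how they estimate these seven parameters: monocular methods must infer depth from a single image, while point cloud-based and fusion methods measure it directly from LiDAR returns.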

Authors

Eduardo Arnold, Omar Y. Al-Jarrah, Mehrdad Dianati, Saber Fallah, David Oxtoby, and Alex Mouzakitis
