Article

Pixel and feature level based domain adaptation for object detection in autonomous driving

Journal

NEUROCOMPUTING
Volume 367, Pages 31-38

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2019.08.022

Keywords

Autonomous driving; Convolutional neural network; Generative adversarial network; Object detection; Unsupervised domain adaptation

Funding

  1. National Research Foundation
  2. Keppel-NUS Corporate Laboratory [R-261-507-019-281]
  3. Keppel Corporation
  4. National University of Singapore

Abstract

Annotating large-scale datasets to train modern convolutional neural networks is prohibitively expensive and time-consuming for many real tasks. One alternative is to train the model on labeled synthetic datasets and apply it to real scenes. However, this straightforward approach often fails to generalize well, mainly because of the domain bias between the synthetic and real datasets. Many unsupervised domain adaptation (UDA) methods have been introduced to address this problem, but most of them focus only on the simpler classification task. This paper presents a novel UDA model that integrates both image-level and feature-level adaptation to solve the cross-domain object detection problem. We employ generative adversarial network objectives and the cycle consistency loss for image translation. Furthermore, region-proposal-based feature adversarial training and classification are proposed to further minimize the domain shift and preserve the semantics of the target objects. Extensive experiments are conducted on several different adaptation scenarios, and the results demonstrate the robustness and superiority of the proposed method. (C) 2019 Elsevier B.V. All rights reserved.
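
The abstract combines CycleGAN-style image translation (adversarial plus cycle-consistency objectives) with feature-level adversarial alignment. The sketch below is not the authors' implementation; it only illustrates, under assumed PyTorch module names and loss weights, how such image-level and feature-level terms are commonly composed for one translation direction (synthetic to real).

```python
# Minimal sketch (illustrative only, not the paper's code) of combining
# image-level and feature-level adaptation losses. All module names, shapes,
# and weights (lam_cyc, lam_feat) are assumptions for illustration.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Toy discriminator over images or feature maps; a real one would be deeper."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def gan_loss(pred, is_real):
    # Least-squares GAN objective, a common choice for image translation.
    target = torch.ones_like(pred) if is_real else torch.zeros_like(pred)
    return nn.functional.mse_loss(pred, target)

def generator_losses(G_s2t, G_t2s, D_img, feat_extractor, D_feat,
                     x_syn, lam_cyc=10.0, lam_feat=0.1):
    """Image-level (adversarial + cycle) plus feature-level adversarial terms
    for the synthetic-to-real direction; the reverse direction is symmetric."""
    fake_real = G_s2t(x_syn)        # translate synthetic image toward real style
    rec_syn = G_t2s(fake_real)      # cycle back to the synthetic domain

    loss_adv_img = gan_loss(D_img(fake_real), is_real=True)  # fool image discriminator
    loss_cyc = nn.functional.l1_loss(rec_syn, x_syn)          # cycle-consistency

    # Feature-level adversarial term: push features of translated images
    # toward the real-domain feature distribution.
    f_fake = feat_extractor(fake_real)
    loss_adv_feat = gan_loss(D_feat(f_fake), is_real=True)

    return loss_adv_img + lam_cyc * loss_cyc + lam_feat * loss_adv_feat
```

A full detector-adaptation pipeline along these lines would additionally apply the detection classification and box-regression losses on the translated (still labeled) synthetic images, with the feature-level alignment applied to region-proposal features rather than whole-image features.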
