Article

NAS-FCOS: Efficient Search for Object Detection Architectures

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 129, Issue 12, Pages 3299-3312

Publisher

SPRINGER
DOI: 10.1007/s11263-021-01523-2

Keywords

Neural architecture search; Object detection; Reinforcement learning; Deep learning

Funding

  1. National Key R&D Program of China [2020AAA0106900]
  2. National Natural Science Foundation of China [U19B2037, 61876152]

Summary

Neural Architecture Search (NAS) has the potential to reduce manual effort in network design by automatically discovering optimal architectures, yet object detection has been less explored in NAS research; here, we propose an efficient method to obtain better object detectors by searching for feature pyramid networks and prediction heads, demonstrating superior performance compared to state-of-the-art hand-designed detectors.

Abstract

Neural Architecture Search (NAS) has shown great potential in reducing manual effort in network design by automatically discovering optimal architectures. Notably, object detection has so far received comparatively little attention from NAS research despite its importance in computer vision. To the best of our knowledge, most recent NAS studies on object detection fail to strike a satisfactory balance between the performance and efficiency of the resulting models, and the search itself often consumes an excessive amount of computational resources. Here we propose an efficient method to obtain better object detectors by searching for the feature pyramid network as well as the prediction head of a simple anchor-free object detector, namely FCOS (Tian et al. in FCOS: Fully convolutional one-stage object detection, 2019), using a tailored reinforcement learning paradigm. With a carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs. The discovered architectures surpass state-of-the-art object detection models (such as Faster R-CNN, RetinaNet, and FCOS) by 1.0 to 5.4 points in AP on the COCO dataset, with comparable computational complexity and memory footprint, demonstrating the efficacy of the proposed NAS method for object detection. Code is available at .
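
To give a rough, self-contained illustration of the reinforcement-learning search paradigm described in the abstract, the Python sketch below runs a toy REINFORCE-style controller over a small, made-up decoder (FPN + head) search space. The operation names, the proxy_reward function, and all hyper-parameters are illustrative assumptions, not the authors' actual search space, reward signal, or implementation.

import math
import random

# Hypothetical discrete search space for a detector's decoder (FPN + head).
# The candidate operations below are placeholders for demonstration only.
SEARCH_SPACE = {
    "fpn_op":     ["conv3x3", "dconv3x3", "sep_conv5x5", "skip"],
    "head_op":    ["conv3x3", "dconv3x3", "sep_conv5x5"],
    "head_depth": [2, 3, 4],
}

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def sample(logits):
    """Pick one choice index per search dimension from the controller's distribution."""
    return {k: random.choices(range(len(v)), weights=softmax(logits[k]))[0]
            for k, v in SEARCH_SPACE.items()}

def proxy_reward(arch):
    """Stand-in for 'train the candidate briefly and measure detection AP'.
    Returns a fabricated score so the loop is runnable end to end."""
    score = random.uniform(0.2, 0.3)
    if SEARCH_SPACE["head_op"][arch["head_op"]] == "sep_conv5x5":
        score += 0.05  # pretend this op helps, purely for illustration
    return score

def reinforce_search(steps=300, lr=0.1):
    """Toy REINFORCE loop: raise the probability of decisions that earned high reward."""
    logits = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}
    baseline, best = 0.0, (None, float("-inf"))
    for _ in range(steps):
        arch = sample(logits)
        r = proxy_reward(arch)
        baseline = 0.9 * baseline + 0.1 * r          # moving-average baseline
        adv = r - baseline
        for k, idx in arch.items():
            p = softmax(logits[k])
            for j in range(len(p)):                  # d log p(idx) / d logit_j
                grad = (1.0 if j == idx else 0.0) - p[j]
                logits[k][j] += lr * adv * grad
        if r > best[1]:
            best = ({k: SEARCH_SPACE[k][i] for k, i in arch.items()}, r)
    return best

if __name__ == "__main__":
    arch, reward = reinforce_search()
    print("best sampled architecture:", arch, "proxy reward:", round(reward, 3))

In the paper's setting the reward would correspond to the detection quality of a briefly trained candidate, which is the costly step that the carefully designed evaluation strategies aim to keep affordable.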
