Article

Differentiable Architecture Search Based on Coordinate Descent

Journal

IEEE Access
Volume 9, Pages 48544-48554

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3068766

Keywords

Computer architecture; Microprocessors; Training; Search problems; Architecture; Task analysis; Network architecture; Automatic machine learning (AutoML); differentiable architecture search (DARTS); neural architecture search (NAS)

Funding

  1. Center for Applied Research in Artificial Intelligence (CARAI) Grant through the Defense Acquisition Program Administration (DAPA) [UD190031RD]
  2. Center for Applied Research in Artificial Intelligence (CARAI) Grant through Agency for Defense Development (ADD) [UD190031RD]


The study introduces DARTS-CD, a differentiable architecture search method based on the coordinate descent algorithm, which searches for the optimal operation on a single sampled edge per training step to improve efficiency. By optimizing each edge separately, DARTS-CD converges faster than prior methods and performs comparably to other efficient search algorithms.
Neural architecture search (NAS) is an automated method that searches for the optimal network architecture by optimizing the combinations of edges and operations. For efficiency, recent differentiable architecture search methods adopt a one-shot network containing all the candidate operations in each edge, instead of sampling and training individual architectures. However, a recent study questions the effectiveness of differentiable methods by showing that random search can achieve comparable performance at the same search cost. There is therefore a need to reduce the search cost even below that of previous differentiable methods. For more efficient differentiable architecture search, we propose a differentiable architecture search based on coordinate descent (DARTS-CD) that searches for the optimal operation over only one sampled edge per training step. DARTS-CD builds on the coordinate descent algorithm, an efficient learning method for large-scale problems that updates only a subset of parameters at a time. In DARTS-CD, one edge is randomly sampled; all candidate operations are performed on that edge, whereas only one operation is applied to each of the other edges. Weights are likewise updated only at the sampled edge. By optimizing each edge separately, as coordinate descent optimizes each coordinate individually, DARTS-CD converges much faster than DARTS while using a network architecture similar to the one used for evaluation. We experimentally show that DARTS-CD performs comparably to state-of-the-art efficient architecture search algorithms, with an extremely low search cost of 0.125 GPU days (1/12 of the search cost of DARTS) on CIFAR-10 and CIFAR-100. Furthermore, a warm-up regularization method is introduced to improve the exploration capability, which further enhances the performance.
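The per-step scheme the abstract describes (mix all candidate operations on one randomly sampled edge, apply only the current best operation on every other edge, and update parameters only at the sampled edge) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the cell size, the scalar candidate operations, and the finite-difference update are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-shot cell: 4 edges, 3 candidate operations per edge.
# The scalar ops stand in for real candidates such as conv / skip / pool.
N_EDGES, N_OPS = 4, 3
OPS = [lambda x: 2.0 * x, lambda x: x, lambda x: 0.5 * x]

# Architecture parameters: one row of logits per edge.
alpha = np.zeros((N_EDGES, N_OPS))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def forward(x, sampled_edge):
    """One pass in the spirit of DARTS-CD: the sampled edge mixes all
    candidate ops (softmax-weighted), while every other edge applies
    only its current argmax operation."""
    for e in range(N_EDGES):
        if e == sampled_edge:
            w = softmax(alpha[e])
            x = sum(w[k] * OPS[k](x) for k in range(N_OPS))
        else:
            x = OPS[int(alpha[e].argmax())](x)
    return x

def train_step(x, target, lr=0.1):
    """Coordinate-descent-style step: sample one edge and update only
    that edge's architecture parameters (central finite differences
    replace backprop here, purely for illustration)."""
    e = int(rng.integers(N_EDGES))
    eps = 1e-4
    for k in range(N_OPS):
        alpha[e, k] += eps
        loss_plus = (forward(x, e) - target) ** 2
        alpha[e, k] -= 2 * eps
        loss_minus = (forward(x, e) - target) ** 2
        alpha[e, k] += eps
        alpha[e, k] -= lr * (loss_plus - loss_minus) / (2 * eps)
    return e

for _ in range(50):
    train_step(x=1.0, target=1.0)
```

Because each step touches the parameters of a single edge only, the cost per step is a small fraction of a full DARTS step, which is the source of the efficiency gain the paper reports.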

