Proceedings Paper

GLiT: Neural Architecture Search for Global and Local Image Transformer

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00008

Keywords

-

Funding

  1. Australian Research Council [DP200103223, FT210100228]
  2. Australian Medical Research Future Fund [MRFAI000085]

Summary

The paper introduces a new Neural Architecture Search (NAS) method for finding better transformer architectures for image recognition. By incorporating a locality module and new search algorithms, the method trades off between global and local information and optimizes the low-level design choices in each module. Extensive experiments on the ImageNet dataset show that it finds transformer variants that are more efficient and discriminative than existing models such as ResNet101 and ViT.

Abstract

We introduce the first Neural Architecture Search (NAS) method to find a better transformer architecture for image recognition. Recently, transformers without CNN-based backbones have been found to achieve impressive performance for image recognition. However, the transformer was designed for NLP tasks and thus can be sub-optimal when directly applied to image recognition. To improve the visual representation ability of transformers, we propose a new search space and search algorithm. Specifically, we introduce a locality module that explicitly models the local correlations in images at a lower computational cost. With the locality module, our search space is defined so that the search algorithm can freely trade off between global and local information and optimize the low-level design choices in each module. To tackle the problem caused by the huge search space, we propose a hierarchical neural architecture search method that searches for the optimal vision transformer at two levels separately with an evolutionary algorithm. Extensive experiments on the ImageNet dataset demonstrate that our method can find transformer variants that are more discriminative and efficient than the ResNet family (e.g., ResNet101) and the baseline ViT for image classification. The source code is available at https://github.com/bychen515/GLiT.
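
To make the global/local trade-off concrete, here is a minimal sketch of a transformer block whose capacity is split between global self-attention and a convolutional locality branch. The module names, the depthwise-convolution design of the locality module, and the num_global_heads knob are illustrative assumptions for this sketch, not GLiT's actual implementation; see the linked repository for that.

```python
import torch
import torch.nn as nn

class LocalityModule(nn.Module):
    """Hypothetical locality module: models correlations among neighboring
    tokens with a depthwise 1-D convolution. An assumption for illustration;
    GLiT's actual locality module may differ."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)

    def forward(self, x):                       # x: (batch, tokens, dim)
        return self.conv(x.transpose(1, 2)).transpose(1, 2)

class GlobalLocalBlock(nn.Module):
    """Searchable block: `num_global_heads` (a hypothetical search-space
    knob) decides how much capacity goes to global self-attention versus
    the local convolution branch."""
    def __init__(self, dim, num_heads=8, num_global_heads=4):
        super().__init__()
        # Embedding dim must be divisible by the attention head count.
        assert num_global_heads == 0 or dim % num_global_heads == 0
        self.use_global = num_global_heads > 0
        self.use_local = num_global_heads < num_heads
        if self.use_global:
            self.attn = nn.MultiheadAttention(dim, num_global_heads,
                                              batch_first=True)
        if self.use_local:
            self.local = LocalityModule(dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        h = self.norm(x)
        out = torch.zeros_like(x)
        if self.use_global:
            out = out + self.attn(h, h, h, need_weights=False)[0]
        if self.use_local:
            out = out + self.local(h)
        return x + out                          # residual connection

block = GlobalLocalBlock(dim=192, num_heads=8, num_global_heads=4)
tokens = torch.randn(2, 196, 192)               # e.g. 14x14 patch tokens
print(block(tokens).shape)                      # torch.Size([2, 196, 192])
```

Setting num_global_heads to 0 or to num_heads recovers a purely local or purely global block, which is the kind of per-module choice the search space exposes.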
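
The evolutionary part of the search can likewise be sketched at a single level. The loop below is a generic evolutionary search over architecture encodings; the callback names (sample_arch, mutate, score) are hypothetical, and the paper applies this kind of search hierarchically at two levels rather than in one flat loop.

```python
import random

def evolutionary_search(sample_arch, mutate, score,
                        population_size=50, generations=20):
    """Generic single-level evolutionary search (illustrative sketch).

    sample_arch() -> arch  : draws a random architecture encoding
    mutate(arch)  -> arch  : perturbs one encoding
    score(arch)   -> float : fitness, e.g. validation accuracy
    """
    population = [sample_arch() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: population_size // 2]   # keep the fittest half
        children = [mutate(random.choice(parents))     # refill by mutation
                    for _ in range(population_size - len(parents))]
        population = parents + children
    return max(population, key=score)

# Toy usage: an "architecture" is the number of global heads per block
# (a hypothetical encoding), scored by a stand-in fitness function.
best = evolutionary_search(
    sample_arch=lambda: [random.randint(0, 8) for _ in range(12)],
    mutate=lambda a: [h if random.random() > 0.1 else random.randint(0, 8)
                      for h in a],
    score=lambda a: -sum((h - 4) ** 2 for h in a),
)
print(best)
```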
