Article

Image classification model based on large kernel attention mechanism and relative position self-attention mechanism

Journal

PEERJ COMPUTER SCIENCE
Volume 9, Issue -, Pages -

Publisher

PEERJ INC
DOI: 10.7717/peerj-cs.1344

Keywords

Attention mechanism; Image classification; Deep learning; Computer vision; Convolutional neural networks


This paper proposes a hybrid CNN-Transformer model that addresses the shortcomings of each component: the CNN's difficulty in capturing global feature representations and the Transformer's deterioration of local feature details. The model achieves accurate localization of features and efficient capture of long-range relationships over a large receptive field. Experimental results demonstrate the excellent performance of the proposed model on the CIFAR-10, CIFAR-100, and birds400 datasets with fewer model parameters.
The Transformer has achieved great success in many computer vision tasks. As research has deepened, it has become clear that Transformers capture long-range features better than convolutional neural networks (CNNs), but local feature details deteriorate when a Transformer extracts them. Conversely, although CNNs are adept at capturing local feature details, they struggle to obtain global feature representations. To solve these problems effectively, this paper proposes a hybrid model consisting of a CNN and a Transformer, inspired by Visual Attention Network (VAN) and CoAtNet. The model introduces Large Kernel Attention (LKA) into the CNN to overcome its difficulty in capturing global feature representations, while using Transformer blocks with a relative position self-attention variant to alleviate the deterioration of local feature details. By combining the advantages of both structures, the model captures the details of local features more accurately and models relationships between distant features more efficiently over a large receptive field. Our experiments show that, in image classification without additional training data, the proposed model achieves excellent results on the CIFAR-10 dataset, the CIFAR-100 dataset, and the birds400 dataset (a public dataset on the Kaggle platform) with fewer model parameters. In particular, SE_LKACAT achieves a Top-1 accuracy of 98.01% on CIFAR-10 with only 7.5M parameters.
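For context on the LKA module the abstract refers to, below is a minimal PyTorch sketch following the decomposition described in the VAN paper: a large-kernel convolution is approximated by a 5x5 depth-wise convolution, a 7x7 depth-wise convolution with dilation 3, and a 1x1 point-wise convolution, whose output gates the input multiplicatively. The layer sizes are the VAN defaults, not necessarily those used in this paper.

```python
import torch
import torch.nn as nn

class LKA(nn.Module):
    """Large Kernel Attention (VAN): a large-kernel convolution decomposed
    into depth-wise, depth-wise dilated, and point-wise parts, whose output
    is used as a multiplicative attention map over the input."""
    def __init__(self, dim: int):
        super().__init__()
        # 5x5 depth-wise conv captures local structure
        self.conv0 = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        # 7x7 depth-wise conv with dilation 3 covers a 19x19 receptive field
        self.conv_spatial = nn.Conv2d(dim, dim, 7, padding=9,
                                      groups=dim, dilation=3)
        # 1x1 point-wise conv mixes channels
        self.conv1 = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.conv0(x)
        attn = self.conv_spatial(attn)
        attn = self.conv1(attn)
        return x * attn  # the attention map gates the input


x = torch.randn(1, 64, 32, 32)
print(LKA(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

Because every convolution is depth-wise or point-wise, the decomposition keeps the parameter count low while the dilated branch supplies the large receptive field that the abstract credits for the global representation.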
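The relative position self-attention variant (pre-softmax, as in CoAtNet) adds a learned, translation-invariant bias to the attention logits before the softmax. The sketch below is a minimal single-head illustration assuming a fixed H x W feature map; the bias-table indexing follows the standard 2-D relative-position scheme and is illustrative rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelPosSelfAttention(nn.Module):
    """Self-attention with a learned 2-D relative position bias added to
    the attention logits before the softmax."""
    def __init__(self, dim: int, height: int, width: int):
        super().__init__()
        self.scale = dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        # one learnable bias per distinct (dy, dx) offset
        self.bias = nn.Parameter(
            torch.zeros((2 * height - 1) * (2 * width - 1)))
        # precompute, for every (query, key) pair, the index of its offset
        ys, xs = torch.meshgrid(torch.arange(height),
                                torch.arange(width), indexing="ij")
        coords = torch.stack([ys.flatten(), xs.flatten()])  # (2, N)
        rel = coords[:, :, None] - coords[:, None, :]       # (2, N, N)
        rel[0] += height - 1                                # shift to >= 0
        rel[1] += width - 1
        idx = rel[0] * (2 * width - 1) + rel[1]             # (N, N)
        self.register_buffer("idx", idx)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, dim) with N = height * width
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) * self.scale
        logits = logits + self.bias[self.idx]  # translation-invariant bias
        return F.softmax(logits, dim=-1) @ v


x = torch.randn(2, 8 * 8, 64)
print(RelPosSelfAttention(64, 8, 8)(x).shape)  # torch.Size([2, 64, 64])
```

Because the bias depends only on the offset between positions rather than their absolute coordinates, the attention retains a convolution-like inductive bias, which is how this variant helps preserve local feature details.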

