4.6 Article

Wavelet-Attention CNN for image classification

Journal

MULTIMEDIA SYSTEMS
Volume 28, Issue 3, Pages 915-924

Publisher

SPRINGER
DOI: 10.1007/s00530-022-00889-8

Keywords

Convolutional neural network; Wavelet transform; Wavelet-Attention; Image classification

Funding

  1. National Key R&D Program of China [2018AAA0102001]
  2. National Natural Science Foundation of China [62072245, 61932020]
  3. Natural Science Foundation of Jiangsu Province [BK20211520]

Abstract

This paper examines a weakness of CNN-based feature learning, namely the noise carried in convolutional feature maps, and proposes a Wavelet-Attention module for image classification. Experiments on CIFAR-10 and CIFAR-100 demonstrate consistent gains in classification accuracy with this approach.
Feature learning methods based on convolutional neural networks (CNNs) have achieved remarkable success in image classification tasks. However, inherent noise and other factors can weaken the effectiveness of convolutional feature statistics. In this paper, we investigate the Discrete Wavelet Transform (DWT) in the frequency domain and design a new Wavelet-Attention (WA) block that applies attention only to the high-frequency components. Based on this block, we propose a Wavelet-Attention convolutional neural network (WA-CNN) for image classification. Specifically, WA-CNN decomposes the feature maps into low-frequency and high-frequency components, which store the basic object structures and the detailed information together with noise, respectively. The WA block then captures the detailed information in the high-frequency domain with different attention factors while preserving the basic object structures in the low-frequency domain. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the proposed WA-CNN achieves significant improvements in classification accuracy over related networks. In particular, with a MobileNetV2 backbone, WA-CNN improves Top-1 accuracy by 1.26% on CIFAR-10 and by 1.54% on CIFAR-100.
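
The abstract describes the core mechanism: feature maps are decomposed by a discrete wavelet transform into a low-frequency component (basic object structure) and high-frequency components (details and noise), and attention is applied only to the high-frequency part. The following is a minimal sketch of that idea, not the authors' implementation: the single-level Haar DWT, the squeeze-and-excitation style channel attention, and all names (haar_dwt2d, WaveletAttentionBlock) are illustrative assumptions.

```python
import torch
import torch.nn as nn


def haar_dwt2d(x):
    """Single-level 2D Haar DWT on a feature map of shape (B, C, H, W).

    Returns the low-frequency approximation (LL) and the stacked
    high-frequency sub-bands (LH, HL, HH), each of shape (B, C, H/2, W/2).
    Assumes H and W are even.
    """
    x00 = x[:, :, 0::2, 0::2]
    x01 = x[:, :, 0::2, 1::2]
    x10 = x[:, :, 1::2, 0::2]
    x11 = x[:, :, 1::2, 1::2]
    ll = (x00 + x01 + x10 + x11) / 2.0  # approximation: basic object structure
    lh = (x00 - x01 + x10 - x11) / 2.0  # horizontal details
    hl = (x00 + x01 - x10 - x11) / 2.0  # vertical details
    hh = (x00 - x01 - x10 + x11) / 2.0  # diagonal details (and most noise)
    return ll, torch.cat([lh, hl, hh], dim=1)


class WaveletAttentionBlock(nn.Module):
    """Hypothetical Wavelet-Attention block: attention is computed only over
    the high-frequency sub-bands, while the low-frequency component is
    passed through unchanged before fusion."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        hf_channels = 3 * channels  # LH + HL + HH stacked along the channel axis
        # Squeeze-and-excitation style channel attention over the detail sub-bands.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hf_channels, hf_channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hf_channels // reduction, hf_channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Fuse the preserved low-frequency part with the re-weighted details.
        self.fuse = nn.Conv2d(channels + hf_channels, channels, kernel_size=1)

    def forward(self, x):
        ll, hf = haar_dwt2d(x)      # decompose the feature map
        hf = hf * self.attn(hf)     # attend only to the high-frequency components
        out = torch.cat([ll, hf], dim=1)
        return self.fuse(out)       # (B, C, H/2, W/2): behaves like a stride-2 stage


# Usage sketch: drop the block into a backbone in place of a strided stage.
if __name__ == "__main__":
    block = WaveletAttentionBlock(channels=64)
    feats = torch.randn(2, 64, 32, 32)
    print(block(feats).shape)  # torch.Size([2, 64, 16, 16])
```

Because the block halves spatial resolution, one natural placement (again an assumption) is to substitute it for a downsampling stage of a backbone such as MobileNetV2, so that attention over the detail sub-bands replaces plain strided convolution or pooling.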


Reviews

Primary Rating

4.6
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-
