Article

CDTNet: Improved Image Classification Method Using Standard, Dilated and Transposed Convolutions

Journal

APPLIED SCIENCES-BASEL
Volume 12, Issue 12

Publisher

MDPI
DOI: 10.3390/app12125984

Keywords

CDTNet; dilated convolution; transposed convolution; feature fusion; receptive field

Funding

  1. Basic and Applied Basic Research Fund of Guangdong Province [2019B1515120085]

Abstract

This study introduces CDTNet, an image classification model based on convolutional neural networks. CDTNet uses two branches with different dilation rates to capture multi-scale features and recovers low-resolution information through transposed convolution. Experimental results demonstrate that CDTNet outperforms state-of-the-art models on multiple benchmark datasets, with lower loss, higher accuracy, and faster convergence.

Convolutional neural networks (CNNs) have achieved great success in image classification tasks. In a convolutional operation, a larger input area captures more context information. Stacking several convolutional layers enlarges the receptive field, but it also increases the number of parameters. Most CNN models use pooling layers to extract important features, but pooling operations cause information loss. Transposed convolution can increase the spatial size of the feature maps to recover the lost low-resolution information. In this study, we used two branches with different dilation rates to obtain features at different scales. Dilated convolution captures richer context information, and the outputs of the two branches are concatenated as the input to the next block. The small feature maps of the top blocks are upsampled by transposed convolution to recover low-resolution prediction maps. We evaluated the model on three image classification benchmark datasets (CIFAR-10, SVHN, and FMNIST) against four state-of-the-art models: VGG16, VGG19, ResNeXt, and DenseNet. The experimental results show that CDTNet achieved lower loss, higher accuracy, and faster convergence in both the training and test stages. The average test accuracy of CDTNet increased by at most 54.81% (on SVHN, compared with VGG19) and by at least 1.28% (on FMNIST, compared with VGG16), which shows that CDTNet delivers better performance and strong generalization ability with fewer parameters.
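The two ideas the abstract describes, parallel branches with different dilation rates whose outputs are concatenated, and transposed convolution used to enlarge small top-block feature maps, can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the block name `CDTBlock`, the channel counts, and the dilation rates (1, 2) are assumptions for demonstration.

```python
import torch
import torch.nn as nn


class CDTBlock(nn.Module):
    """Hypothetical sketch of one CDTNet-style block: two parallel 3x3
    convolution branches with different dilation rates, concatenated
    along the channel axis (dilation rates are illustrative)."""

    def __init__(self, in_ch, out_ch, rates=(1, 2)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding=r keeps the spatial size unchanged for a
                # 3x3 kernel with dilation r
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=r, dilation=r),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])

    def forward(self, x):
        # Concatenate branch outputs channel-wise for the next block
        return torch.cat([branch(x) for branch in self.branches], dim=1)


# Transposed convolution (kernel 2, stride 2) doubles the spatial size,
# recovering resolution lost by earlier downsampling.
upsample = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)

x = torch.randn(1, 3, 32, 32)        # CIFAR-10-sized input
block = CDTBlock(3, 64, rates=(1, 2))
y = block(x)                          # shape: (1, 128, 32, 32)
z = upsample(y)                       # shape: (1, 64, 64, 64)
```

Because each branch pads by its own dilation rate, both branches preserve the input's spatial size, so their outputs can be concatenated directly; the transposed convolution then doubles the spatial resolution of the combined maps.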
