Article

CDTNet: Improved Image Classification Method Using Standard, Dilated and Transposed Convolutions

Journal

APPLIED SCIENCES-BASEL
Volume 12, Issue 12, Pages: -

Publisher

MDPI
DOI: 10.3390/app12125984

Keywords

CDTNet; dilated convolution; transposed convolution; feature fusion; receptive field

Funding

  1. Basic and Applied Basic Research Fund of Guangdong Province [2019B1515120085]

Abstract

This study introduces CDTNet, an image classification model based on convolutional neural networks. CDTNet utilizes two branches with different dilation rates to capture multi-scale features and recovers low-resolution information through transposed convolution. Experimental results demonstrate that CDTNet outperforms state-of-the-art models on multiple benchmark datasets with lower loss, higher accuracy, and faster convergence speed.
Convolutional neural networks (CNNs) have achieved great success in image classification tasks. In a convolution operation, a larger input area captures more contextual information. Stacking several convolutional layers enlarges the receptive field, but it also increases the number of parameters. Most CNN models use pooling layers to extract important features, but pooling operations cause information loss. Transposed convolution can increase the spatial size of the feature maps to recover the lost low-resolution information. In this study, we used two branches with different dilation rates to obtain features of different sizes. Dilated convolution captures richer contextual information, and the outputs of the two branches are concatenated as the input to the next block. The small feature maps of the top blocks are upsampled by transposed convolution to increase their spatial size and recover low-resolution prediction maps. We evaluated the model on three image classification benchmark datasets (CIFAR-10, SVHN, and FMNIST) against four state-of-the-art models, namely VGG16, VGG19, ResNeXt, and DenseNet. The experimental results show that CDTNet achieved lower loss, higher accuracy, and faster convergence in both the training and test stages. The average test accuracy of CDTNet improved by at most 54.81% (over VGG19 on SVHN) and by at least 1.28% (over VGG16 on FMNIST), which shows that CDTNet delivers better performance and stronger generalization with fewer parameters.
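The architecture described in the abstract (two parallel convolution branches with different dilation rates, channel-wise concatenation, and a transposed convolution to restore spatial resolution) can be illustrated with a short PyTorch sketch. This is a minimal, hypothetical reconstruction based only on the abstract; the class names, channel widths, dilation rates, and layer counts are assumptions and do not reflect the authors' published configuration.

```python
# Hypothetical sketch of a CDTNet-style model; not the authors' implementation.
import torch
import torch.nn as nn


class DualDilationBlock(nn.Module):
    """Two parallel 3x3 convolution branches with different dilation rates;
    their outputs are concatenated along the channel axis."""

    def __init__(self, in_ch: int, branch_ch: int, d1: int = 1, d2: int = 2):
        super().__init__()
        # padding = dilation keeps the spatial size unchanged for 3x3 kernels
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, 3, padding=d1, dilation=d1),
            nn.BatchNorm2d(branch_ch),
            nn.ReLU(inplace=True),
        )
        self.branch2 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, 3, padding=d2, dilation=d2),
            nn.BatchNorm2d(branch_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Concatenate the two branches to form the input of the next block
        return torch.cat([self.branch1(x), self.branch2(x)], dim=1)


class CDTNetSketch(nn.Module):
    """Stacks dual-dilation blocks, downsamples, then uses a transposed
    convolution to recover spatial resolution before classification."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = DualDilationBlock(3, 32)    # -> 64 channels
        self.pool = nn.MaxPool2d(2)               # 32x32 -> 16x16 (CIFAR-10)
        self.block2 = DualDilationBlock(64, 64)   # -> 128 channels
        # Transposed convolution upsamples 16x16 back to 32x32
        self.up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        x = self.block1(x)
        x = self.pool(x)
        x = self.block2(x)
        x = self.up(x)
        return self.head(x)


if __name__ == "__main__":
    model = CDTNetSketch(num_classes=10)
    dummy = torch.randn(1, 3, 32, 32)   # CIFAR-10-sized input
    print(model(dummy).shape)           # torch.Size([1, 10])
```

Running the script on a CIFAR-10-sized input (3x32x32) prints a (1, 10) logit tensor, confirming that the dual-dilation concatenation and the transposed-convolution upsampling preserve the expected shapes in this simplified setting.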
