4.7 Article

Dictionary-enabled efficient training of ConvNets for image classification

Journal

IMAGE AND VISION COMPUTING
Volume 135

Publisher

ELSEVIER
DOI: 10.1016/j.imavis.2023.104718

Keywords

Sparse representation; Convolutional neural networks; Deep learning; Dictionary learning; Image classification

This paper proposes a dictionary-based training method for ConvNets that reduces training time significantly while maintaining accuracy by exploiting redundancy in the training data. Experimental results on three publicly available datasets show a 4.5 times reduction in computational burden compared to state-of-the-art algorithms like ResNet-{18,34,50}, with comparable accuracy.
Convolutional networks (ConvNets) are computationally expensive but well known for their performance on image data. One way to reduce their complexity is to exploit the sparsity inherent in the data. However, since the gradients involved in ConvNets require dynamic updates, applying data sparsity in the training step is not straightforward. Dictionary-based learning methods can be useful here since they encode the original data in a sparse form. This paper proposes a new dictionary-based training paradigm for ConvNets that exploits redundancy in the training data while keeping the distinctive features intact. The ConvNet is then trained on the reduced, sparse dataset. The new approach significantly reduces the training time without compromising accuracy. To the best of our knowledge, this is the first implementation of a ConvNet on dictionary-based sparse training data. The proposed method is validated on three publicly available datasets: MNIST, USPS, and MNIST FASHION. The experimental results show a significant reduction of 4.5 times in the overall computational burden of a vanilla ConvNet for all datasets, while the accuracy remains intact at 97.21% for MNIST, 96.81% for USPS, and 88.4% for FASHION. These results are comparable to state-of-the-art algorithms, such as ResNet-{18,34,50}, trained on the full training dataset. © 2023 Elsevier B.V. All rights reserved.
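The abstract does not spell out the pipeline, but a minimal sketch of the general idea, learning a dictionary over the training images and sparse-coding them before the ConvNet sees them, might look like the following. The dictionary size, sparsity level, and synthetic placeholder data are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of a dictionary-based preprocessing stage (not the
# paper's code): learn a dictionary over flattened images, sparse-code each
# image with OMP, and rebuild a sparse approximation for ConvNet training.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# Placeholder for real training images (e.g. MNIST: 28x28 flattened to 784).
X_train = rng.random((1000, 784)).astype(np.float32)

dico = MiniBatchDictionaryLearning(
    n_components=256,              # dictionary size (assumed value)
    transform_algorithm="omp",     # orthogonal matching pursuit for the codes
    transform_n_nonzero_coefs=20,  # sparsity level per image (assumed value)
    batch_size=64,
    random_state=0,
)
codes = dico.fit(X_train).transform(X_train)  # shape (1000, 256), mostly zeros

# Sparse approximation of the data: codes @ dictionary, reshaped back to images.
X_sparse = (codes @ dico.components_).reshape(-1, 1, 28, 28)
print(X_sparse.shape, "nonzero fraction:", float((codes != 0).mean()))

# A ConvNet would then be trained on X_sparse (or on the codes themselves)
# instead of the full raw training set; the training loop is omitted here.
```

Whether the network consumes the sparse reconstructions, the sparse codes, or a pruned set of exemplars is resolved in the paper itself; the sketch only shows where dictionary learning enters the pipeline.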
