Article

A novel compact design of convolutional layers with spatial transformation towards lower-rank representation for image classification

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 255, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109723

Keywords

Neural network compression; Tucker decomposition; Spatial transformation; Image classification

Funding

  1. National Key Research and Development Program of China [2020YFB1313400]
  2. National Natural Science Foundation of China [61903358, 61773367, 61821005]
  3. Youth Innovation Promotion Association of the Chinese Academy of Sciences [2022196, Y202051]


Convolutional neural networks (CNNs) can be inconvenient in situations with limited storage space due to their numerous parameters. This paper proposes a novel compact design for convolutional layers using spatial transformation to achieve a lower-rank form. The effectiveness of the method is validated in an image classification task.
Convolutional neural networks (CNNs) usually come with numerous parameters and are therefore inconvenient in some situations, such as when storage space is limited. Low-rank decomposition is one effective approach to network compression. However, current methods fall far short of the theoretically optimal compression performance, because the versatility of convolution filters limits the low-rankness of commonly trained filter sets. We propose a novel compact design for convolutional layers that uses spatial transformation to achieve a much lower-rank form. The convolution filters in our design are generated from a predefined Tucker product form, followed by learnable individual spatial transformations on each filter. The low-rank (Tucker) part lowers the parameter capacity, while the transformation part enhances the feature representation capacity. We validate the proposed approach on image classification, aiming to compress parameters while also improving accuracy, with experiments on the MNIST, CIFAR10, CIFAR100, and ImageNet datasets. On ImageNet, our approach outperforms low-rank-based state-of-the-art methods by 2% to 6% in top-1 validation accuracy, and it likewise outperforms a series of low-rank-based state-of-the-art methods on the other datasets. The experiments validate the efficacy of the proposed method. Our code is available at https://github.com/liubc17/low_rank_compact_transformed.

(c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
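The abstract only outlines the layer design, so the following is a minimal sketch, not the authors' released implementation (see their repository for that). It assumes a PyTorch layer, here hypothetically named `TuckerTransformedConv2d`, whose filter bank is contracted from a small Tucker core and two channel factor matrices, and it assumes the per-filter spatial transformation is a learnable 2x3 affine warp applied with `grid_sample`; the ranks, initialisation, and the exact form of the transformation are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TuckerTransformedConv2d(nn.Module):
    """Hypothetical sketch: conv filters generated from a Tucker-factored core,
    each output filter warped by its own learnable affine spatial transform."""

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 rank_in=4, rank_out=8, stride=1, padding=1):
        super().__init__()
        self.stride, self.padding = stride, padding
        # Low-rank (Tucker) part: small core plus channel factor matrices.
        self.core = nn.Parameter(
            0.1 * torch.randn(rank_out, rank_in, kernel_size, kernel_size))
        self.factor_out = nn.Parameter(0.1 * torch.randn(out_channels, rank_out))
        self.factor_in = nn.Parameter(0.1 * torch.randn(in_channels, rank_in))
        # Spatial-transformation part (an assumption of this sketch): one
        # learnable 2x3 affine warp per output filter, initialised to identity.
        identity = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        self.theta = nn.Parameter(identity.repeat(out_channels, 1, 1))

    def build_filters(self):
        # Tucker product: contract the core with the two factor matrices to
        # recover a full (out, in, k, k) filter bank from far fewer parameters.
        filters = torch.einsum('oa,ib,abhw->oihw',
                               self.factor_out, self.factor_in, self.core)
        # Warp each output filter with its own affine spatial transform.
        grid = F.affine_grid(self.theta, filters.size(), align_corners=False)
        return F.grid_sample(filters, grid, align_corners=False)

    def forward(self, x):
        return F.conv2d(x, self.build_filters(),
                        stride=self.stride, padding=self.padding)


if __name__ == "__main__":
    layer = TuckerTransformedConv2d(in_channels=16, out_channels=32)
    x = torch.randn(2, 16, 28, 28)
    print(layer(x).shape)  # torch.Size([2, 32, 28, 28])
```

Under these illustrative ranks, the Tucker part stores rank_out·rank_in·k² + out_channels·rank_out + in_channels·rank_in parameters instead of out_channels·in_channels·k² for a standard convolution (800 + 192 affine parameters versus 4608 in the example above), which mirrors the abstract's split between a low-rank part that lowers parameter capacity and a transformation part that restores representation capacity.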
