4.7 Article

Fast Filter Pruning via Coarse-to-Fine Neural Architecture Search and Contrastive Knowledge Transfer

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TNNLS.2023.3236336

Keywords

Costs; Degradation; Knowledge transfer; Knowledge engineering; Computational efficiency; Convolutional neural networks; Training; Deep neural network; filter pruning; knowledge transfer (KT); smaller network


Filter pruning is a representative technique for making CNNs lightweight. To broaden the usability of CNNs, the pruning process itself must also be lightweight. To that end, a coarse-to-fine neural architecture search (NAS) algorithm and a fine-tuning structure based on contrastive knowledge transfer (CKT) are proposed.
Filter pruning is the most representative technique for making convolutional neural networks (CNNs) lightweight. In general, filter pruning consists of a pruning phase and a fine-tuning phase, and both still incur considerable computational cost. Thus, to increase the usability of CNNs, filter pruning itself needs to be made lightweight. For this purpose, we propose a coarse-to-fine neural architecture search (NAS) algorithm and a fine-tuning structure based on contrastive knowledge transfer (CKT).

First, candidate subnetworks are coarsely searched by a filter importance scoring (FIS) technique, and then the best subnetwork is obtained by a fine search based on NAS-based pruning. The proposed pruning algorithm does not require a supernet and adopts a computationally efficient search process, so it can create a pruned network with higher performance at a lower cost than existing NAS-based search algorithms. Next, a memory bank is configured to store the information of interim subnetworks, i.e., by-products of the subnetwork search phase. Finally, the fine-tuning phase delivers the information of the memory bank through a CKT algorithm. Thanks to the proposed fine-tuning algorithm, the pruned network achieves high performance and fast convergence because it takes clear guidance from the memory bank.

Experiments on various datasets and models show that the proposed method offers a significant speed advantage over state-of-the-art (SOTA) models with negligible performance loss. For example, the proposed method pruned ResNet-50 trained on ImageNet-2012 by up to 40.01% with no accuracy loss. Also, since the computational cost amounts to only 210 GPU hours, the proposed method is computationally more efficient than SOTA techniques. The source code is publicly available at https://github.com/sseung0703/FFP.
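The abstract does not specify the exact FIS criterion, but the coarse search step it describes — scoring each filter's importance and keeping only the top-ranked ones — can be sketched with a common proxy, the L1-norm of each filter's weights. The function names, the L1-norm criterion, and the `keep_ratio` parameter below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


def filter_importance_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Score each output filter of a conv layer.

    The L1-norm of each filter's weights is used here as a stand-in
    for the paper's FIS criterion (assumption).
    """
    # weight shape: (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


def coarse_prune_mask(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Boolean mask keeping the top `keep_ratio` fraction of filters."""
    scores = filter_importance_scores(conv)
    n_keep = max(1, int(round(keep_ratio * scores.numel())))
    threshold = scores.topk(n_keep).values.min()
    return scores >= threshold


# Example: keep half of the 8 filters in a toy conv layer.
conv = nn.Conv2d(3, 8, kernel_size=3)
mask = coarse_prune_mask(conv, keep_ratio=0.5)
print(mask.sum().item())
```

In the paper's pipeline, masks like this would only define the coarse candidate subnetworks; the final architecture is then selected by the fine NAS-based search.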

