Article

Pruning by explaining: A novel criterion for deep neural network pruning

Journal

PATTERN RECOGNITION
Volume 115, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2021.107899

Keywords

Pruning; Layer-wise relevance propagation (LRP); Convolutional neural network (CNN); Interpretation of models

Funding

  1. German Federal Ministry of Education and Research (BMBF) [01IS18025A, 01IS18037A, 01IS17058, 031L0207D, 01IS18056A, 01GQ1115, 01GQ0850]
  2. Deutsche Forschungsgemeinschaft (DFG) [EXC 2046/1, 390685689]
  3. Institute of Information & Communications Technology Planning & Evaluation (IITP) - Korea Government [20190-00079]
  4. Artificial Intelligence Graduate School Program, Korea University
  5. ST-SUTD Cyber Security Corporate Laboratory
  6. AcRF Tier2 grant [MOE2016-T2-2-154]
  7. TL project Intent Inference
  8. SUTD internal grant Fundamentals and Theory of AI Systems

Abstract

This paper proposes a novel criterion for CNN pruning, inspired by neural network interpretability: the most relevant weights or filters are identified automatically via relevance scores obtained from concepts of explainable AI (XAI). The method efficiently prunes CNN models in transfer-learning setups and outperforms existing criteria in resource-constrained scenarios. It compresses models iteratively while maintaining or even improving accuracy, has a computational cost on the order of a gradient computation, and is simple to apply, requiring no hyperparameter tuning for pruning.
The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming to not sacrifice performance. In this paper, we propose a novel criterion for CNN pruning inspired by neural network interpretability: The most relevant units, i.e. weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI). By exploring this idea, we connect the lines of interpretability and model compression research. We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks. The method is evaluated on a broad range of computer vision datasets. Notably, our novel criterion is not only competitive or better compared to state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in the resource-constrained application scenario in which the data of the task to be transferred to is very scarce and one chooses to refrain from fine-tuning. Our method is able to compress the model iteratively while maintaining or even improving accuracy. At the same time, it has a computational cost in the order of gradient computation and is comparatively simple to apply without the need for tuning hyperparameters for pruning.
(c) 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
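
To make the criterion concrete, below is a minimal, hypothetical PyTorch sketch of relevance-based filter pruning. It is not the authors' implementation: rather than propagating relevance through the network with true LRP rules, it uses an activation-times-gradient proxy for per-filter relevance, and the helper names (`filter_relevance`, `prune_filters`), the random reference batch, and the per-layer pruning ratio are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F
import torchvision

def filter_relevance(model, conv, inputs, targets):
    """Score each output filter of `conv` by a relevance proxy
    (activation * gradient), a stand-in for a true LRP backward pass."""
    store = {}

    def hook(module, inp, out):
        out.retain_grad()      # keep the feature map's gradient after backward
        store["act"] = out

    handle = conv.register_forward_hook(hook)
    model.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    handle.remove()

    act = store["act"]
    # aggregate relevance over batch and spatial dims -> one score per filter
    return (act * act.grad).sum(dim=(0, 2, 3)).abs()

def prune_filters(conv, relevance, ratio=0.1):
    """Zero the weights of the `ratio` fraction of least relevant filters."""
    k = max(1, int(ratio * conv.out_channels))
    drop = relevance.argsort()[:k]
    with torch.no_grad():
        conv.weight[drop] = 0.0
        if conv.bias is not None:
            conv.bias[drop] = 0.0

# usage: prune the first conv layer of a (randomly initialized) VGG-16
model = torchvision.models.vgg16(weights=None).eval()
conv = model.features[0]
x = torch.randn(8, 3, 224, 224)    # stand-in for a small reference batch
y = torch.randint(0, 1000, (8,))   # stand-in labels
prune_filters(conv, filter_relevance(model, conv, x, y))
```

In the method described above, the relevance scores would instead come from an LRP pass over a small reference set of the target task, and pruning would proceed iteratively, with or without fine-tuning between steps, as stated in the abstract.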

Authors

Seul-Ki Yeom; Philipp Seegerer; Sebastian Lapuschkin; Alexander Binder; Simon Wiedemann; Klaus-Robert Müller; Wojciech Samek
