Article

Customizing a teacher for feature distillation

Journal

INFORMATION SCIENCES
Volume 640, Issue -, Pages -

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.119024

Keywords

Neural network compression; Knowledge distillation; Knowledge transfer

Abstract

Knowledge distillation is a method for training a lightweight network by transferring class probability knowledge from a cumbersome teacher network. However, transferring only class probability knowledge limits the distillation performance. Therefore, several approaches have been proposed to transfer the teacher's knowledge at the feature map level. In this paper, we revisit the feature distillation method and find that the larger the teacher's architecture/capacity becomes, the more difficult it is for the student to imitate its features. Thus, the feature distillation method is unable to achieve its full potential. To address this, a novel end-to-end distillation framework, termed Customizing a Teacher for Feature Distillation (CTFD), is proposed to train a teacher that is more compatible with its student. In addition, we apply the customized teacher to three feature distillation methods. Moreover, data augmentation is used during student training to improve its generalization performance. Extensive empirical experiments and analyses are conducted on three computer vision tasks, including image classification, transfer learning, and object detection, to substantiate the effectiveness of the proposed method.
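As a rough illustration of the distinction the abstract draws between class-probability (logit) distillation and feature-map-level distillation, the sketch below implements both losses in PyTorch. It is not the paper's CTFD framework (the customized-teacher training procedure is not described here); the module names, channel sizes, and the 1x1 projection layer are assumptions made only for demonstration.

```python
# Minimal sketch: logit-based KD vs. a generic feature-level distillation loss.
# Not the CTFD method from the paper; all names/shapes here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def logit_kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Hinton-style KD: KL divergence between temperature-softened class probabilities."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_soft_student = F.log_softmax(student_logits / t, dim=1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)


class FeatureDistillLoss(nn.Module):
    """Generic feature distillation: regress student feature maps onto the
    teacher's after a 1x1 convolution that matches channel dimensions."""

    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        # Teacher features are detached: only the student (and projection) are updated.
        return F.mse_loss(self.proj(student_feat), teacher_feat.detach())


if __name__ == "__main__":
    # Usage sketch: combine task loss, logit KD, and feature distillation.
    batch, classes = 8, 100
    student_logits = torch.randn(batch, classes, requires_grad=True)
    teacher_logits = torch.randn(batch, classes)
    student_feat = torch.randn(batch, 64, 8, 8)
    teacher_feat = torch.randn(batch, 256, 8, 8)
    labels = torch.randint(0, classes, (batch,))

    feat_loss_fn = FeatureDistillLoss(student_channels=64, teacher_channels=256)
    loss = (F.cross_entropy(student_logits, labels)
            + logit_kd_loss(student_logits, teacher_logits)
            + feat_loss_fn(student_feat, teacher_feat))
    loss.backward()
    print(loss.item())
```

The feature-level term is where the paper's observation applies: when the teacher's capacity greatly exceeds the student's, matching teacher feature maps directly becomes harder, which is the gap CTFD targets by training a teacher that is more compatible with its student.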

