4.6 Article

Knowledge distillation in deep learning and its applications

Journal

PEERJ COMPUTER SCIENCE
Volume -, Issue -, Pages -

Publisher

PEERJ INC
DOI: 10.7717/peerj-cs.474

Keywords

Knowledge distillation; Model compression; Student model; Teacher model; Transferring knowledge; Deep learning

This paper presents an outlook on knowledge distillation techniques applied to deep learning models and introduces a new metric, the distillation metric, for comparing the performance of different solutions. Interesting conclusions drawn from the survey, along with current challenges and possible research directions, are discussed in the paper.
Deep-learning-based models are relatively large, and it is hard to deploy such models on resource-limited devices such as mobile phones and embedded devices. One possible solution is knowledge distillation, whereby a smaller model (the student model) is trained using information from a larger model (the teacher model). In this paper, we present an outlook on knowledge distillation techniques applied to deep learning models. To compare the performance of different techniques, we propose a new metric, the distillation metric, which compares knowledge distillation solutions based on model size and accuracy score. Based on the survey, we draw and present some interesting conclusions, including current challenges and possible research directions.
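The abstract describes the teacher-student setup in general terms but gives no code. The sketch below is a minimal, illustrative example of standard response-based knowledge distillation (softened-logits matching plus hard-label cross-entropy) in PyTorch, not the specific procedures surveyed in the paper; the `teacher`, `student`, and `loader` objects, the temperature, and the loss weighting are all assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with ordinary cross-entropy."""
    # Soften both output distributions with the temperature, then match them with KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def train_student(student, teacher, loader, epochs=1, lr=1e-3):
    """Illustrative training loop: a frozen teacher supervises a smaller student."""
    teacher.eval()  # the teacher's weights stay fixed during distillation
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, labels in loader:
            with torch.no_grad():
                teacher_logits = teacher(inputs)
            student_logits = student(inputs)
            loss = distillation_loss(student_logits, teacher_logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

The exact definition of the paper's distillation metric is not given in this abstract, so no formula is reproduced here; it is described only as combining model size and accuracy score.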
