Journal
IEEE TRANSACTIONS ON CLOUD COMPUTING
Volume 11, Issue 2, Pages 1733-1745
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCC.2022.3160129
Keywords
Deep learning; transfer learning; knowledge distillation
In recent years, deep neural networks have shown extraordinary power in various practical learning tasks, especially object detection, classification, and natural language processing. However, deploying such large models on resource-constrained devices or embedded systems is challenging due to their high computational cost. Techniques such as model partitioning, pruning, and quantization reduce this cost, but at the expense of accuracy. Knowledge distillation transfers knowledge from a well-trained model (the teacher) to a smaller, shallower model (the student). Instead of running a learning model in the cloud, we can deploy distilled models on various edge devices, significantly reducing computational cost and memory usage while prolonging battery lifetime. In this work, we propose a novel neuron manifold distillation (NMD) method, in which the student model imitates the teacher's output distribution and learns the feature geometry of the teacher model. In addition, to further improve the reliability of cloud-based learning systems, we propose a confident prediction mechanism to calibrate model predictions. We conduct experiments with different distillation configurations over multiple datasets, and our method demonstrates a consistent improvement in accuracy-speed trade-offs for the distilled model.
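The abstract describes the student imitating the teacher's output distribution; the standard mechanism for this (Hinton-style knowledge distillation, which NMD builds on) is a KL-divergence loss on temperature-softened logits. The sketch below illustrates only that generic baseline, not the paper's NMD feature-geometry term or its confident prediction mechanism, whose details are not given in the abstract; function names and the temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens
    # the distribution, exposing "dark knowledge" in small logits.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    # Generic distillation term: KL(teacher || student) on softened
    # outputs, scaled by T^2 so gradients stay comparable across T.
    p = softmax(teacher_logits, temperature)  # soft targets (teacher)
    q = softmax(student_logits, temperature)  # student prediction
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

In practice this term is combined with the ordinary cross-entropy loss on ground-truth labels; when the student matches the teacher exactly, the distillation term vanishes.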