Article

MDLdroidLite: A Release-and-Inhibit Control Approach to Resource-Efficient Deep Neural Networks on Mobile Devices

Journal

IEEE TRANSACTIONS ON MOBILE COMPUTING
Volume 21, Issue 10, Pages 3670-3686

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TMC.2021.3062575

Keywords

Training; Adaptation models; Neurons; Data models; Smart phones; Brain modeling; Mobile computing; Mobile deep learning; deep neural networks; dynamic optimization control; resource constraint

Funding

  1. Australian Research Council (ARC) [DP180103932, DP190101888]


Mobile deep learning (MDL) has emerged as a privacy-preserving learning paradigm for mobile devices, offering unique features such as continual learning and low-latency inference for building personal mobile sensing applications. However, squeezing deep learning onto mobile devices is extremely challenging due to resource constraints. Traditional deep neural networks (DNNs) are usually over-parameterized and therefore incur a huge resource overhead for on-device learning. In this paper, we present a novel on-device deep learning framework named MDLdroidLite that transforms traditional DNNs into resource-efficient model structures for on-device learning. To minimize resource overhead, we propose a novel release-and-inhibit control (RIC) approach based on model predictive control theory that efficiently grows DNNs from a tiny seed to a backbone. We also design a gate-based fast adaptation mechanism for channel-level knowledge transfer that quickly adapts new-born neurons to existing neurons, enabling safe parameter adaptation and fast convergence for on-device training. Our evaluations show that MDLdroidLite accelerates on-device training on various personal mobile sensing (PMS) datasets with 28x to 50x fewer model parameters and 4x to 10x fewer floating-point operations than state-of-the-art model structures, while maintaining the same accuracy level.
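To make the release-and-inhibit idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of an MPC-style growth controller: at each training round it predicts, over a short horizon, whether adding a few neurons is likely to pay off against the extra parameter cost, "releasing" growth when it does and "inhibiting" it otherwise. The function name, horizon length, and cost weight are all illustrative assumptions.

```python
# Hypothetical release-and-inhibit (RIC) controller sketch; the real
# MDLdroidLite controller is based on full model predictive control theory.

def ric_step(loss_history, layer_width, max_width, horizon=3, cost_weight=1e-3):
    """Return the number of neurons to add this round (0 = inhibit)."""
    # Inhibit if we lack history to predict from, or the layer is full-grown.
    if len(loss_history) < 2 or layer_width >= max_width:
        return 0
    # Crude one-step model of training progress: the recent loss slope.
    slope = loss_history[-1] - loss_history[-2]
    # Predicted improvement over the horizon if extra capacity keeps the
    # current rate of loss reduction going (negative slope = loss dropping).
    predicted_gain = -slope * horizon
    growth = min(4, max_width - layer_width)  # candidate release size
    cost = cost_weight * growth               # penalty for extra parameters
    # Release growth only when the predicted gain outweighs the cost.
    return growth if predicted_gain > cost else 0

# Loss still dropping fast: the controller releases growth (returns 4).
print(ric_step([1.0, 0.8], layer_width=16, max_width=64))
# Loss has plateaued: the controller inhibits growth (returns 0).
print(ric_step([0.5, 0.5], layer_width=16, max_width=64))
```

The appeal of this control view is that model growth becomes an explicit, per-round optimization against a resource budget rather than a fixed schedule, which is what lets the framework grow a tiny model only as far as the task demands.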

