Journal
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
Volume 34, Issue 2, Pages 475-488
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2022.3222509
Keywords
Mobile edge computing; fused-layer; DNN inference; partial offloading; model parallelism
Abstract
With the rapid development of the Internet of Things (IoT) and the explosive advance of deep learning, there is an urgent need to enable deep learning inference on IoT devices in Mobile Edge Computing (MEC). To address the limited computation capability of IoT devices in processing complex Deep Neural Networks (DNNs), computation offloading has been proposed as a promising approach. Recently, partial computation offloading has been developed to dynamically adjust the task assignment strategy under different channel conditions for better performance. In this paper, we take advantage of intrinsic DNN computation characteristics and propose a novel Fused-Layer-based (FL-based) DNN model parallelism method to accelerate inference. The key idea is that a DNN layer can be converted into several smaller layers to increase partial computation offloading flexibility, and thus enable better computation offloading solutions. However, there is a trade-off between computation offloading flexibility and model parallelism overhead. We therefore investigate the optimal DNN model parallelism and the corresponding scheduling and offloading strategies in partial computation offloading. In particular, we propose a Particle Swarm Optimization with Minimizing Waiting (PSOMW) method, which explores and updates the FL strategy, path scheduling strategy, and path offloading strategy to reduce time complexity and avoid invalid solutions. Finally, we validate the effectiveness of the proposed method on commonly used DNNs. The results show that the proposed method reduces DNN inference time by a factor of 12.75 on average compared to the legacy No-FL (NFL) algorithm, and comes within 0.04% of the optimal solution obtained by the Brute Force (BF) algorithm.
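The core idea the abstract describes, converting one DNN layer into several smaller layers that can be offloaded independently, can be illustrated with a minimal sketch. This is not the paper's fused-layer implementation; it only shows, for a single dense layer, that partitioning a layer's output neurons into sub-layers preserves the layer's result while creating independently schedulable pieces. The helper name `split_layer` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # weights of one dense layer: 8 outputs, 4 inputs
x = rng.standard_normal(4)        # layer input

def split_layer(W, parts):
    """Partition a layer's output neurons into `parts` smaller layers
    (hypothetical helper, for illustration only)."""
    return np.array_split(W, parts, axis=0)

# Each smaller layer can now be computed (and, in an MEC setting,
# offloaded) independently, increasing partial-offloading flexibility.
sub_layers = split_layer(W, parts=2)
partial = [Wi @ x for Wi in sub_layers]   # computable on different devices
y_split = np.concatenate(partial)

y_full = W @ x
assert np.allclose(y_full, y_split)       # splitting preserves the layer's output
```

The trade-off the abstract mentions appears here as well: more sub-layers give the offloading scheduler more freedom, but each split adds parallelism overhead (extra transfers and synchronization).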
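The PSOMW method is described as exploring and updating candidate strategies in a particle-swarm fashion. The sketch below is a generic particle swarm optimizer on a toy continuous objective, not the paper's PSOMW (which encodes FL, path scheduling, and path offloading strategies and rejects invalid solutions); it is included only to show the velocity/position update that PSO-style search relies on.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(objective, dim=2, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Generic PSO: each particle tracks its personal best, the swarm
    tracks a global best, and velocities blend inertia with attraction
    to both bests."""
    pos = rng.uniform(-5, 5, (swarm, dim))
    vel = np.zeros((swarm, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.array([objective(p) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for an inference-latency objective, minimized at (1, 2).
best, best_val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2)
```

In PSOMW, the "position" would encode discrete FL/scheduling/offloading choices rather than a point in the plane, and the minimizing-waiting component steers the update away from invalid solutions.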