Article

Dynamic Offloading for Multiuser Multi-CAP MEC Networks: A Deep Reinforcement Learning Approach

Journal

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
Volume 70, Issue 3, Pages 2922-2927

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TVT.2021.3058995

Keywords

Task analysis; Energy consumption; Vehicle dynamics; Energy measurement; System performance; Time-varying systems; Time measurement; DQN; dynamic optimization problem; MEC

Funding

  1. NSFC [61871139/61801132]
  2. International Science and Technology Cooperation Projects of Guangdong Province [2020A0505100060]
  3. Natural Science Foundation of Guangdong Province [2017A030308006, 2018A030310338, 2020A1515010484]
  4. Science and Technology Program of Guangzhou [201807010103]
  5. Guangzhou University [YK2020008]
  6. Science and Technology Development Fund, Macau SAR [0003/2019/A1, 0018/2019/AMJ]
  7. Ministry of Science and Technology of the People's Republic of China [0018/2019/AMJ]
  8. Major Program of Guangdong Basic and Applied Research [2019B030302002]
  9. Science and Technology Major Project of Guangzhou [202007030006]
  10. Non-Recurring Engineering of Huawei Technology Company OAA [20121100507097B]


Summary

In this paper, a multiuser mobile edge computing network is studied where tasks can be partially offloaded to multiple computational access points. A novel offloading strategy based on DQN is proposed, allowing users to dynamically fine-tune the offloading proportion to optimize system performance. Simulation results demonstrate the advantages of the proposed DQN-based offloading strategy over conventional methods.

Abstract

In this paper, we study a multiuser mobile edge computing (MEC) network, where tasks from users can be partially offloaded to multiple computational access points (CAPs). We consider practical cases where task characteristics and computational capability at the CAPs may be time-varying, thus creating a dynamic offloading problem. To deal with this problem, we first formulate it as a Markov decision process (MDP), and then introduce the state and action spaces. We further design a novel offloading strategy based on the deep Q network (DQN), where the users can dynamically fine-tune the offloading proportion in order to ensure the system performance measured by the latency and energy consumption. Simulation results are finally presented to verify the advantages of the proposed DQN-based offloading strategy over conventional ones.
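To illustrate the kind of MDP the abstract describes, the sketch below sets up a discrete action space of offloading proportions and a reward equal to the negative weighted sum of latency and energy, then picks the best proportion for a given (time-varying) state. All parameter values, variable names, and the latency/energy models are illustrative assumptions for a single user and a single CAP, not the paper's actual formulation; the paper trains a DQN over such states and actions, which is not reproduced here.

```python
import random

# Hypothetical action space: discrete offloading proportions a user can pick.
PROPORTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]

def step_reward(task_bits, local_cps, cap_cps, uplink_bps, rho,
                w_latency=0.5, w_energy=0.5):
    """Reward: negative weighted sum of latency and energy when a
    proportion rho of the task is offloaded to the CAP (assumed models)."""
    local_bits = (1 - rho) * task_bits
    off_bits = rho * task_bits
    # Local and offloaded parts are assumed to execute in parallel.
    t_local = local_bits / local_cps
    t_off = off_bits / uplink_bps + off_bits / cap_cps
    latency = max(t_local, t_off)
    # Simple linear energy models for local CPU and radio transmission
    # (coefficients are illustrative, not from the paper).
    energy = 1e-7 * local_bits + 0.1 * (off_bits / uplink_bps)
    return -(w_latency * latency + w_energy * energy)

def random_state():
    """Time-varying state: current task size and CAP computing capability."""
    return {"task_bits": random.uniform(1e6, 5e6),
            "cap_cps": random.uniform(1e6, 1e7)}

def greedy_action(state, local_cps=2e6, uplink_bps=5e6):
    """One-step greedy baseline a trained DQN would aim to outperform:
    choose the proportion with the best immediate reward."""
    return max(PROPORTIONS,
               key=lambda rho: step_reward(state["task_bits"], local_cps,
                                           state["cap_cps"], uplink_bps, rho))
```

Under these assumed numbers, a fast CAP pulls the greedy choice toward heavier offloading, while a slow CAP favors local execution, which is the trade-off the DQN learns to track as the state varies over time.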
