Article

Distributed Task Migration Optimization in MEC by Extending Multi-Agent Deep Reinforcement Learning Approach

Journal

Publisher: IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2020.3046737

Keywords

Task analysis; Reinforcement learning; Quality of service; Energy consumption; Optimization; Markov processes; Computational modeling; Energy; mobile edge computing; mobility; multi-agent reinforcement learning; task migration

Funding

  1. National Key Research and Development Program of China [2018YFB1701403]
  2. National Natural Science Foundation of China [62072165, 61876061, U19A2058]
  3. Zhijiang Lab, China [2020KE0AB01]


Geographically closer to mobile users, mobile edge computing (MEC) can provide cloud-like capabilities to users more efficiently. This makes it possible for resource-limited mobile users to offload their computation-intensive and latency-sensitive tasks to MEC nodes. Owing to these benefits, MEC has drawn wide attention and extensive work has been done. However, few studies address the task migration problem caused by distributed user mobility, which cannot be ignored when quality of service (QoS) is considered. In this article, we study the task migration problem and aim to minimize the average completion time of tasks under a migration energy budget. There are multiple independent users, and the movement of each mobile user is memoryless, forming a sequential decision-making process; a reinforcement learning algorithm based on a Markov chain model is therefore applied with low computational complexity. To further facilitate cooperation among users, we devise a distributed task migration algorithm based on the counterfactual multi-agent (COMA) reinforcement learning approach. Extensive experiments are carried out to assess the performance of this distributed task migration algorithm. Compared with the no-migration (NM) and single-agent actor-critic (AC) algorithms, the proposed distributed task migration algorithm achieves up to a 30-50 percent reduction in average completion time.
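The COMA approach named in the abstract credits each agent via a counterfactual baseline: an agent's advantage is its centralized critic value minus the expected value over its own alternative actions, with all other agents' actions held fixed. A minimal sketch of that advantage computation, with illustrative toy numbers (the Q values, policy, and migration choices below are assumptions, not taken from the paper):

```python
import numpy as np

def counterfactual_advantage(q_values, policy, chosen_action):
    """COMA-style counterfactual advantage for one agent.

    q_values: critic values Q(s, (u^-a, u'_a)), one entry per alternative
              action u'_a of agent a, other agents' actions held fixed.
    policy:   agent a's current action distribution pi_a(. | tau_a).
    chosen_action: index of the action agent a actually took.
    """
    baseline = np.dot(policy, q_values)          # E_{u'_a ~ pi_a}[Q]
    return q_values[chosen_action] - baseline

# Toy example: an agent choosing among 3 candidate MEC nodes to migrate to.
q = np.array([1.0, 2.0, 0.5])    # critic values for each migration choice
pi = np.array([0.2, 0.5, 0.3])   # agent's current policy
adv = counterfactual_advantage(q, pi, chosen_action=1)
print(adv)                        # 2.0 - 1.35 = 0.65
```

The baseline marginalizes out only this agent's action, so the advantage isolates its individual contribution, which addresses the multi-agent credit-assignment problem that motivates COMA over independent single-agent actor-critic training.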


