Journal
Publisher
IEEE
DOI: 10.1109/icc40277.2020.9149124
Keywords
mobile edge computing; computation offloading; resource allocation; deep reinforcement learning
Funding
- U.S. National Science Foundation (NSF) [ECCS-1711087]
We study the problem of computation offloading and resource allocation in multi-user multi-channel mobile edge computing (MEC) systems. Each user equipment (UE) in the system has a computation-intensive and time-sensitive task that must be executed either locally or remotely on an MEC server. All UE tasks have individual deadline constraints that are treated as soft constraints. The MEC server has limited computational resources, which impose hard constraints on the overall offloading computation capacity. The objective is to minimize a cost function expressed as a weighted sum of the energy consumption, delay, and deadline penalty of all UEs. The optimization is performed over three decision parameters: whether to offload a given task, which wireless channel to use during offloading, and how much MEC resource to allocate to an offloaded task. We apply a deep reinforcement learning approach known as Deep Deterministic Policy Gradient (DDPG) to solve the problem. Simulation results demonstrate that the proposed algorithm outperforms existing schemes such as Deep Q-Network (DQN).
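The weighted-sum cost described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the weight values, the per-UE quantities, and the soft-deadline penalty form (penalizing delay in excess of the deadline) are all assumptions.

```python
# Hypothetical sketch of a weighted-sum offloading cost:
# total cost = sum over UEs of (w_E * energy + w_D * delay + w_P * deadline penalty).
# Weights and the penalty form are illustrative assumptions, not from the paper.

def ue_cost(energy, delay, deadline, w_energy=0.5, w_delay=0.5, w_penalty=1.0):
    """Cost for one UE; the deadline is a soft constraint penalizing excess delay."""
    penalty = max(0.0, delay - deadline)  # zero if the task meets its deadline
    return w_energy * energy + w_delay * delay + w_penalty * penalty

def system_cost(ues):
    """Total cost over all UEs; each UE is a tuple (energy, delay, deadline)."""
    return sum(ue_cost(e, d, t) for e, d, t in ues)

# Example: two UEs, the second missing its deadline by 0.2 s.
print(system_cost([(1.0, 0.5, 1.0), (2.0, 1.2, 1.0)]))  # → 2.55
```

In the paper's setting, a DDPG agent would choose the offloading decision, channel, and MEC resource share that jointly minimize this kind of objective; the sketch only makes the shape of the objective concrete.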