Proceedings Paper

Deep Reinforcement Learning based Computation Offloading and Resource Allocation for MEC

Mobile edge computing (MEC) has the potential to enable computation-intensive applications in 5G networks. MEC extends the computational capacity at the edge of wireless networks by migrating computation-intensive tasks to an MEC server. In this paper, we consider a multi-user MEC system in which multiple user equipments (UEs) can offload computation over wireless channels to an MEC server. We formulate the sum cost of delay and energy consumption over all UEs as our optimization objective. To minimize the sum cost of the considered MEC system, we jointly optimize the offloading decisions and the computational resource allocation. However, obtaining an optimal policy in such a dynamic system is challenging. Beyond the immediate reward, Reinforcement Learning (RL) also takes the long-term return into consideration, which is essential for time-variant dynamic systems such as our multi-user wireless MEC system. To this end, we propose an RL-based optimization framework to tackle resource allocation in wireless MEC. Specifically, a Q-learning based scheme and a Deep Reinforcement Learning (DRL) based scheme are proposed. Simulation results show that the proposed schemes achieve a significant reduction in the sum cost compared to other baselines.
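The Q-learning based offloading idea in the abstract can be sketched with a toy model: each UE observes a channel state, chooses between local execution and offloading, and receives the negated delay-plus-energy cost as its reward. The sketch below is a minimal illustration under assumed, hypothetical cost values and i.i.d. channel transitions; it is not the paper's system model or parameter setting.

```python
import random

# Hedged sketch: tabular Q-learning for a binary offloading decision in a
# toy MEC model. All numeric costs and the two-state channel are
# illustrative assumptions, not values from the paper.

LOCAL, OFFLOAD = 0, 1
CHANNEL_STATES = (0, 1)          # 0 = poor channel, 1 = good channel (assumed)
LOCAL_COST = 1.0                 # assumed delay+energy cost of local execution
OFFLOAD_COST = {0: 1.5, 1: 0.4}  # assumed offloading cost per channel state

def reward(channel, action):
    """Negated sum cost of delay and energy for one UE's decision."""
    cost = LOCAL_COST if action == LOCAL else OFFLOAD_COST[channel]
    return -cost

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in CHANNEL_STATES for a in (LOCAL, OFFLOAD)}
    for _ in range(episodes):
        s = rng.choice(CHANNEL_STATES)
        # epsilon-greedy exploration
        if rng.random() < eps:
            a = rng.choice((LOCAL, OFFLOAD))
        else:
            a = max((LOCAL, OFFLOAD), key=lambda x: q[(s, x)])
        r = reward(s, a)
        s2 = rng.choice(CHANNEL_STATES)  # i.i.d. channel transition (assumption)
        best_next = max(q[(s2, LOCAL)], q[(s2, OFFLOAD)])
        # standard Q-learning update
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

q = train()
policy = {s: max((LOCAL, OFFLOAD), key=lambda a: q[(s, a)]) for s in CHANNEL_STATES}
# With these assumed costs, the learned policy executes locally on a poor
# channel and offloads on a good one.
```

The DRL scheme mentioned in the abstract would replace the Q-table with a neural-network function approximator over a larger joint state of channel conditions and task sizes; the update rule keeps the same temporal-difference form.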
