Journal
PHYSICAL COMMUNICATION
Volume 62, Issue -, Pages -
Publisher
ELSEVIER
DOI: 10.1016/j.phycom.2023.102240
Keywords
Mobile edge computing; Unmanned aerial vehicle; Deep reinforcement learning; Offload pairing
This paper proposes a UAV-enabled MEC network that jointly minimizes transmission delay, computation delay, and system energy consumption, solving the resulting problem with an intelligent optimization algorithm combining the Deep Dueling Double Q-Network (D3QN) and the Twin Delayed Deep Deterministic Policy Gradient (TD3).
Dynamically moving Unmanned Aerial Vehicles (UAVs) have emerged as an effective means to significantly enhance the flexibility and transmission performance of mobile edge computing (MEC). However, in practical scenarios, UAVs often face limitations in terms of data storage capacity and computational power. In this paper, a UAV-enabled MEC network with multiple users and multiple edge computing servers is proposed, where the UAV is equipped with limited-size buffers. An optimization problem is formulated to jointly optimize UAV flight trajectories, offload server pairings, task offload ratios, and UAV transmit power to minimize transmission delay, computation delay, and system energy consumption. To tackle the intractable non-convex optimization issue, an intelligent optimization algorithm based on Deep Dueling Double Q-Network (D3QN)-Twin Delayed Deep Deterministic Policy Gradient (TD3) is proposed, which is able to efficiently determine the optimal solution. Simulation results demonstrate that our proposed intelligent algorithm exhibits good convergence and achieves a favorable balance between delay and energy consumption.
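The abstract names two building blocks for the hybrid discrete-continuous decision problem: D3QN for the discrete offload-server pairing and TD3 for the continuous trajectory, offload-ratio, and transmit-power variables. The paper's exact network details are not given here; the following is a minimal sketch of the standard target computations each component uses, assuming textbook forms (dueling value decomposition with double-Q targets, and TD3's clipped double-Q target with smoothed target actions) rather than the authors' specific implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Discrete branch (D3QN), e.g. for offload-server pairing ---
def dueling_q(value, advantages):
    """Dueling decomposition: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    return value + advantages - advantages.mean()

def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double-Q target: the online net selects the next action,
    the target net evaluates it (reduces overestimation bias)."""
    a_star = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[a_star]

# --- Continuous branch (TD3), e.g. for trajectory / power / ratio ---
def td3_target(reward, gamma, next_action, noise_std, noise_clip,
               q1_target, q2_target, action_low, action_high):
    """TD3 target: clipped Gaussian noise smooths the target action,
    and the minimum of two target critics is used (clipped double-Q)."""
    noise = np.clip(rng.normal(0.0, noise_std, size=np.shape(next_action)),
                    -noise_clip, noise_clip)
    a = np.clip(next_action + noise, action_low, action_high)
    return reward + gamma * min(q1_target(a), q2_target(a))
```

In a hybrid scheme of this kind, each environment step produces one discrete pairing action (from the dueling Q-values) and one continuous control vector (from the TD3 actor), and the two branches are trained from the same replayed transitions with their respective targets above.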