Journal
PHYSICAL COMMUNICATION
Volume 62
Publisher
ELSEVIER
DOI: 10.1016/j.phycom.2023.102240
Keywords
Mobile edge computing; Unmanned aerial vehicle; Deep reinforcement learning; Offload pairing
This paper proposes a UAV-enabled MEC network to optimize transmission delay, computation delay, and system energy consumption, and achieves the optimal solution through an intelligent optimization algorithm based on Deep Dueling Double Q-Network and Twin Delayed Deep Deterministic Policy Gradient.
Dynamically moving Unmanned Aerial Vehicles (UAVs) have emerged as an effective means of significantly enhancing the flexibility and transmission performance of mobile edge computing (MEC). However, in practical scenarios, UAVs often face limitations in data storage capacity and computational power. In this paper, a UAV-enabled MEC network with multiple users and multiple edge computing servers is proposed, in which the UAV is equipped with limited-size buffers. An optimization problem is formulated to jointly optimize the UAV flight trajectory, offload server pairings, task offload ratios, and UAV transmit power so as to minimize transmission delay, computation delay, and system energy consumption. To tackle this intractable non-convex optimization problem, an intelligent optimization algorithm based on the Deep Dueling Double Q-Network (D3QN) and Twin Delayed Deep Deterministic Policy Gradient (TD3) is proposed, which can efficiently determine the optimal solution. Simulation results demonstrate that the proposed intelligent algorithm exhibits good convergence and achieves a favorable balance between delay and energy consumption.
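As a rough illustration of the D3QN component named in the abstract (the paper's own network architecture and reward design are not given here), the sketch below shows the two mechanisms D3QN combines: the dueling decomposition Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), and the Double-DQN target in which the online network selects the next action while the target network evaluates it. All array shapes, reward values, and the discount factor are illustrative assumptions, not values from the paper; in this setting the discrete actions could correspond to, e.g., offload-server pairings.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling decomposition: combine state-value V(s) and advantages A(s, a)
    into Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: the online net picks the greedy next action,
    the target net evaluates it, which reduces Q-value overestimation."""
    a_star = np.argmax(q_online_next, axis=-1)                      # action selection
    q_eval = np.take_along_axis(q_target_next,                      # action evaluation
                                a_star[..., None], axis=-1).squeeze(-1)
    return reward + gamma * (1.0 - done) * q_eval

# Toy batch: 2 states, 3 discrete actions (purely illustrative numbers).
v = np.array([[1.0], [0.5]])
adv = np.array([[0.2, -0.1, -0.1], [0.0, 0.3, -0.3]])
q = dueling_q(v, adv)                                               # Q-values per action

# Toy Double-DQN targets for a batch of 2 transitions, 2 actions.
q_online_next = np.array([[1.0, 2.0], [3.0, 0.0]])
q_target_next = np.array([[0.5, 0.7], [0.9, 0.1]])
target = double_dqn_target(reward=np.array([1.0, 1.0]), gamma=0.9,
                           q_online_next=q_online_next,
                           q_target_next=q_target_next,
                           done=np.array([0.0, 1.0]))
```

TD3 handles the continuous decisions (e.g. trajectory and transmit power) with an actor-critic structure and twin critics; it is omitted here for brevity.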