Journal
COMPUTER NETWORKS
Volume 199, Issue -, Pages -
Publisher
ELSEVIER
DOI: 10.1016/j.comnet.2021.108397
Keywords
Unmanned aerial vehicles; Task offloading; Edge computing; Q-learning
Funding
- Deanship of Scientific Research at King Saud University through the Vice Deanship of Scientific Research Chairs: Chair of Smart Technologies
UAVs are widely used and often offload tasks to edge servers to save cost. This paper proposes an optimized task-offloading strategy based on a reinforcement learning algorithm that shows better convergence and performance in practical application scenarios.
Unmanned aerial vehicles (UAVs) have been deployed in many applications, such as power grid inspection, forest fire prevention, and pollution surveillance. They often cruise along a fixed route above the target area. Owing to the cost of remote communication and of computationally intensive local tasks, resource-constrained drones tend to offload tasks to edge servers. In most cases, drones lack prior knowledge of user nodes and edge servers, and must reduce their altitude to provide services. It is therefore necessary to carefully decide when and where to collect and offload tasks to avoid unnecessary energy consumption and time delays. In this paper, we formulate a profit-maximization problem under constraints such as time sensitivity, and propose an optimized task-offloading strategy based on reinforcement learning. We directly address the difficulties of the profit-maximization problem with a modified Q-learning algorithm, and we evaluate its performance in practical application scenarios with different environmental parameters. The experimental results show that the proposed solution achieves better convergence and performance, as well as better reusability in similar application scenarios.
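The abstract describes a modified Q-learning approach to the offloading decision but does not specify the MDP. As a rough illustration only, the following sketch shows a plain tabular Q-learning agent choosing between local computation and offloading at waypoints along a fixed cruise route; the states, actions, rewards, and hyperparameters are invented for this toy example and are not the paper's formulation.

```python
import random

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1        # learning rate, discount, exploration rate
ACTIONS = ("compute_locally", "offload")

def step(state, action):
    """Toy environment: offloading only pays off at waypoints with edge coverage."""
    near_server = state % 3 == 0                 # pretend every third waypoint has coverage
    if action == "offload":
        reward = 1.0 if near_server else -0.5    # cheap offload only under coverage
    else:
        reward = 0.2                             # local compute: steady but costly
    return (state + 1) % 9, reward               # cruise on to the next waypoint

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(9) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            # epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(9)}
```

After training, the greedy policy offloads only at the covered waypoints (states 0, 3, 6) and computes locally elsewhere, mirroring the paper's point that the drone must learn when and where offloading is worthwhile.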