Article

Distributed Energy-Efficient Multi-UAV Navigation for Long-Term Communication Coverage by Deep Reinforcement Learning

Journal

IEEE Transactions on Mobile Computing
Volume 19, Issue 6, Pages 1274-1285

Publisher

IEEE Computer Society
DOI: 10.1109/TMC.2019.2908171

Keywords

Navigation; Energy consumption; Reinforcement learning; Path planning; Drones; Three-dimensional displays; UAV control; deep reinforcement learning; energy efficiency; communication coverage

Funding

  1. National Natural Science Foundation of China [61772072]
  2. US National Science Foundation, Directorate for Computer & Information Science & Engineering, Division of Computer and Network Systems [1525920, 1704662]

Abstract

In this paper, we aim to design a fully-distributed control solution that navigates a group of unmanned aerial vehicles (UAVs), serving as mobile base stations (BSs) flying around a target area, to provide long-term communication coverage for ground mobile users. Different from existing solutions that mainly address the problem from an optimization perspective, we propose a decentralized deep reinforcement learning (DRL) based framework to control each UAV in a distributed manner. Our goal is to maximize the temporal average coverage score achieved by all UAVs in a task, maximize the geographical fairness over all considered points-of-interest (PoIs), and minimize the total energy consumption, while keeping the UAVs connected and within the area border. We design the state, observation, action space, and reward explicitly, and model each UAV with deep neural networks (DNNs). We conducted extensive simulations to find an appropriate set of hyperparameters, including the experience replay buffer size, the number of neural units in the two fully-connected hidden layers of the actor, critic, and their target networks, and the discount factor for weighting future rewards. The simulation results demonstrate the superiority of the proposed model over the state-of-the-art DRL-EC approach based on deep deterministic policy gradient (DDPG), as well as three other baselines.
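The abstract names the main learning components: per-UAV actor and critic networks with two fully-connected hidden layers, target copies of each, an experience replay buffer, and a discount factor. The PyTorch sketch below illustrates how such a DDPG-style per-UAV learner could be wired up; the state/action dimensions, hidden-layer width, buffer capacity, and discount value are illustrative assumptions, not the hyperparameters reported in the paper.

```python
# Minimal per-UAV actor-critic sketch (DDPG-style). All sizes and
# hyperparameter values are assumptions for illustration only.
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 2   # assumed local observation / action sizes
HIDDEN = 256                    # assumed units per fully-connected hidden layer
GAMMA = 0.99                    # assumed discount factor for future rewards
BUFFER_SIZE = 100_000           # assumed experience replay capacity

class Actor(nn.Module):
    """Maps a UAV's local observation to a continuous flight action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, ACTION_DIM), nn.Tanh(),  # bounded action
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Estimates the Q-value of an (observation, action) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

# Each UAV keeps its own replay buffer of (obs, act, reward, next_obs, done).
replay = deque(maxlen=BUFFER_SIZE)

def td_target(critic_target, actor_target, reward, next_obs, done):
    """One-step TD target used to train the critic against target networks."""
    with torch.no_grad():
        next_q = critic_target(next_obs, actor_target(next_obs))
        return reward + GAMMA * (1.0 - done) * next_q

# Shape check on a single synthetic transition.
actor_t, critic_t = Actor(), Critic()
next_obs = torch.randn(1, STATE_DIM)
reward, done = torch.tensor([[0.5]]), torch.tensor([[0.0]])
print(td_target(critic_t, actor_t, reward, next_obs, done).shape)  # (1, 1)
```

In a full training loop each UAV would also keep online copies of these networks, sample minibatches from its replay buffer, and softly update the target networks, following the standard DDPG recipe the paper builds on.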
