Article

Neural Combinatorial Deep Reinforcement Learning for Age-Optimal Joint Trajectory and Scheduling Design in UAV-Assisted Networks

Journal

IEEE Journal on Selected Areas in Communications

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JSAC.2021.3065049

Keywords

Trajectory; Optimal scheduling; Unmanned aerial vehicles; Delays; Wireless networks; Throughput; Measurement; Age of information; deep reinforcement learning; convex optimization

Funding

  1. U.S. National Science Foundation [CNS-1814477]
  2. Office of Naval Research (ONR) under MURI [N00014-19-1-2621]


Abstract

In this article, an unmanned aerial vehicle (UAV)-assisted wireless network is considered in which a battery-constrained UAV moves towards energy-constrained ground nodes to receive status updates about their observed processes. The UAV's flight trajectory and the scheduling of status updates are jointly optimized with the objective of minimizing the normalized weighted sum of Age of Information (NWAoI) values for the different physical processes at the UAV. The problem is first formulated as a mixed-integer program. Then, for a given scheduling policy, a convex optimization-based solution is proposed to derive the UAV's optimal flight trajectory and the time instants of the updates. However, finding the optimal scheduling policy is challenging due to the combinatorial nature of the formulated problem. Therefore, to complement the proposed convex optimization-based solution, a finite-horizon Markov decision process (MDP) is used to find the optimal scheduling policy. Since the state space of the MDP is extremely large, a novel neural combinatorial-based deep reinforcement learning (NCRL) algorithm using a deep Q-network (DQN) is proposed to obtain the optimal policy. However, for large-scale scenarios with numerous nodes, the DQN architecture can no longer learn the optimal scheduling policy efficiently. Motivated by this, a long short-term memory (LSTM)-based autoencoder is proposed to map the state space to a fixed-size vector representation in such large-scale scenarios, while capturing the spatio-temporal interdependence between the update locations and time instants. A lower bound on the minimum NWAoI is analytically derived, which provides system design guidelines on the appropriate choice of importance weights for the different nodes. Furthermore, an upper bound on the UAV's minimum speed is obtained to achieve this lower bound value.
The numerical results also demonstrate that the proposed NCRL approach can significantly improve the achievable NWAoI per process compared to the baseline policies, such as weight-based and discretized state DQN policies.
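As a rough illustration of the NWAoI objective the abstract describes, the sketch below assumes a simplified generate-at-will model in which each node's AoI grows linearly over time and resets to zero the instant the UAV collects that node's update. The function name `nwaoi`, the triangular-area AoI integral, and the normalization by the time horizon are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Hypothetical sketch: time-averaged normalized weighted-sum AoI (NWAoI)
# under a generate-at-will model where AoI resets to zero at each update.

def nwaoi(update_times, weights, horizon):
    """Average weighted-sum AoI over [0, horizon].

    update_times: dict mapping node index -> list of update instants.
    weights: dict mapping node index -> importance weight (e.g. summing to 1).
    horizon: length of the observation window.
    """
    total = 0.0
    for k, w in weights.items():
        last = 0.0   # time of the most recent update for node k
        area = 0.0   # integral of AoI(t) over the horizon for node k
        for t in sorted(update_times.get(k, [])):
            # AoI rises linearly from 0 since `last`, so the area between
            # consecutive updates is a triangle of base (t - last).
            area += 0.5 * (t - last) ** 2
            last = t
        # tail segment after the final update up to the horizon
        area += 0.5 * (horizon - last) ** 2
        total += w * area
    return total / horizon
```

For example, two equally weighted nodes each updated once at the midpoint of a length-10 horizon yield a lower average NWAoI than a node that is never updated, which matches the intuition that the scheduling of update instants (jointly with the trajectory that makes them feasible) drives the objective.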
