Article

Reinforcement Learning for a Cellular Internet of UAVs: Protocol Design, Trajectory Control, and Resource Management

Journal

IEEE WIRELESS COMMUNICATIONS
Volume 27, Issue 1, Pages 116-123

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/MWC.001.1900262

Keywords

Sensors; Reinforcement learning; Unmanned aerial vehicles; Task analysis; Resource management; Internet of Things; Trajectory

Funding

  1. National Natural Science Foundation of China [61625101]
  2. U.S. AFOSR [MURI 18RT0073, MURI FA9550-18-1-0502]
  3. NSF [EARS-1839818, CNS-1717454, CNS-1731424, CCF-1908308, CNS-1646607]


Unmanned aerial vehicles (UAVs) can serve as powerful Internet of Things components that execute sensing tasks over next-generation cellular networks, a setting generally referred to as the cellular Internet of UAVs. However, due to the high mobility of UAVs and shadowing in air-to-ground channels, UAVs operate in a dynamic and uncertain environment. They must therefore improve the quality of service of both sensing and communication without complete information, which makes reinforcement learning well suited to the cellular Internet of UAVs. In this article, we propose a distributed sense-and-send protocol to coordinate UAVs for sensing and transmission. We then apply reinforcement learning in the cellular Internet of UAVs to solve key problems such as trajectory control and resource management. Finally, we point out several potential future research directions.
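To make the role of reinforcement learning concrete, the abstract's idea of a UAV learning a trajectory without complete environment information can be sketched as tabular Q-learning on a toy grid. The grid size, reward values, target location, and hyperparameters below are illustrative assumptions for the sketch, not the formulation used in the article.

```python
import random

# Illustrative only: a UAV learns to fly from (0, 0) to an assumed
# high-QoS sensing cell on a 5x5 grid, paying an energy cost per move.
GRID = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # four candidate moves
SENSING_TARGET = (4, 4)                        # assumed best sensing cell
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # assumed hyperparameters

# Q-table over (state, action) pairs, initialized to zero.
Q = {((x, y), a): 0.0
     for x in range(GRID) for y in range(GRID)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply a move, clip to the grid, and return (next_state, reward)."""
    dx, dy = ACTIONS[action]
    nxt = (min(max(state[0] + dx, 0), GRID - 1),
           min(max(state[1] + dy, 0), GRID - 1))
    # +10 for reaching the sensing target, -1 per move (energy cost).
    reward = 10.0 if nxt == SENSING_TARGET else -1.0
    return nxt, reward

def train(episodes=2000, seed=0):
    """Epsilon-greedy tabular Q-learning over many sense-and-send episodes."""
    random.seed(seed)
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(50):  # cap episode length
            if random.random() < EPSILON:           # explore
                action = random.randrange(len(ACTIONS))
            else:                                    # exploit
                action = max(range(len(ACTIONS)),
                             key=lambda a: Q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(Q[(nxt, a)] for a in range(len(ACTIONS)))
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - Q[(state, action)])
            state = nxt
            if state == SENSING_TARGET:
                break

train()
```

After training, a greedy rollout of the learned Q-table from (0, 0) traces a short trajectory to the target cell, mirroring how an RL agent refines its flight path from interaction alone rather than from a full channel model.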

