3.8 Proceedings Paper

Mobile Cellular-Connected UAVs: Reinforcement Learning for Sky Limits

Journal

2020 IEEE Globecom Workshops (GC Wkshps)
Publisher

IEEE
DOI: 10.1109/GCWkshps50303.2020.9367580

Keywords

Reinforcement learning; multi-armed bandit; unmanned aerial vehicle (UAV); cellular networks; handover rate; energy efficiency

Funding

  1. Catalan and Spanish grants [2017-SGR-01479, RTI2018-099722-B-I00]


A cellular-connected unmanned aerial vehicle (UAV) faces several key challenges concerning connectivity and energy efficiency. We propose a novel, general multi-armed bandit (MAB) learning algorithm that reduces the UAV's disconnectivity time, handover (HO) rate, and energy consumption while accounting for its task completion time. By formulating the problem as a function of the UAV's velocity, we show how each of these performance indicators (PIs) is improved by choosing a proper range for the corresponding learning parameter, e.g., a 50% reduction in HO rate compared to a blind strategy. However, the results reveal that the optimal combination of learning parameters depends critically on the specific application and on the weights assigned to the PIs in the final objective function.
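As a rough illustration of the kind of formulation the abstract describes (not the authors' exact algorithm), the sketch below treats a discrete set of UAV velocities as bandit arms and selects among them with a standard UCB1 rule, using a weighted reward that penalizes disconnectivity time, handovers, and energy. The velocity set, weights, and simulated environment are all illustrative assumptions.

```python
import math
import random

# Hypothetical bandit over candidate UAV velocities (m/s); the arms, PI weights,
# and toy environment below are illustrative assumptions, not the paper's model.
VELOCITIES = [5.0, 10.0, 15.0, 20.0]
W_DISC, W_HO, W_ENERGY = 0.4, 0.3, 0.3  # assumed weights on the PIs


def simulate_flight_segment(v):
    """Stand-in for one flight segment at velocity v.

    Returns normalized (disconnectivity time, handover rate, energy use).
    In this toy model, faster flight costs more handovers and energy; it only
    serves to exercise the bandit loop.
    """
    disconnect = random.uniform(0.0, 0.3) + 0.01 * v
    handovers = random.uniform(0.0, 0.2) + 0.02 * v
    energy = 0.03 * v + random.uniform(0.0, 0.1)
    return disconnect, handovers, energy


def reward(disconnect, handovers, energy):
    """Higher is better: a weighted penalty mapped to roughly [0, 1]."""
    cost = W_DISC * disconnect + W_HO * handovers + W_ENERGY * energy
    return max(0.0, 1.0 - cost)


def ucb1(n_rounds=500):
    """Standard UCB1 selection over the velocity arms."""
    counts = [0] * len(VELOCITIES)
    means = [0.0] * len(VELOCITIES)
    for t in range(1, n_rounds + 1):
        if t <= len(VELOCITIES):  # play each arm once first
            arm = t - 1
        else:
            arm = max(
                range(len(VELOCITIES)),
                key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]),
            )
        r = reward(*simulate_flight_segment(VELOCITIES[arm]))
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return VELOCITIES[max(range(len(VELOCITIES)), key=lambda a: means[a])]


if __name__ == "__main__":
    print("Selected velocity (toy run):", ucb1())
```

Changing the weights W_DISC, W_HO, and W_ENERGY shifts which velocity the bandit favors, which mirrors the abstract's point that the best parameter combination depends on how the PIs are weighted in the objective.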

Authors

