Article

Deep-Reinforcement-Learning-Based Joint 3-D Navigation and Phase-Shift Control for Mobile Internet of Vehicles Assisted by RIS-Equipped UAVs

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 20, Pages 18054-18066

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/JIOT.2023.3277598

Keywords

deep reinforcement learning (DRL); mobile Internet of Vehicles (IoV); optimal trajectory; reconfigurable intelligent surfaces (RISs); unmanned aerial vehicles (UAVs); wireless communication; autonomous navigation


This paper proposes using an unmanned aerial vehicle equipped with a reconfigurable intelligent surface (RISeUAV) to address the limitations of UAV-assisted communication in 5G/6G networks, with deep reinforcement learning automating joint 3-D navigation and phase-shift control. Simulation results show the effectiveness of the method.
Unmanned aerial vehicles (UAVs) are used to improve the performance of wireless communication networks (WCNs), notably in the context of the Internet of Things (IoT). However, deploying UAVs as active aerial base stations (BSs) or relays is questionable in fifth-generation (5G) WCNs operating at quasi-optical millimeter-wave (mmWave) frequencies and in sixth-generation (6G) visible-light WCNs, because the high path loss in these bands attenuates even the line-of-sight (LoS) signals propagated by UAVs. Moreover, the limited energy, size, and weight of UAVs make it cost-inefficient to design aerial multiple-input/multiple-output BSs that strengthen signals through active beamforming. Equipping UAVs with a reconfigurable intelligent surface (RIS), a passive component, can help address these problems in UAV-assisted communication for 5G and optical 6G networks. We propose adopting an RIS-equipped UAV (RISeUAV) to provide aerial LoS service and facilitate communication for the mobile Internet of Vehicles (IoV) in obstructed, dense urban areas covered by 5G/6G. RISeUAV-aided wireless communication enables vehicle-to-vehicle/vehicle-to-everything links through which IoV nodes exchange the IoT information required for sensor fusion and autonomous driving. However, autonomous navigation of the RISeUAV for this purpose is a multifaceted problem that is computationally challenging to solve optimally in real time. We intelligently automate RISeUAV navigation using deep reinforcement learning to address these optimality and time-complexity issues. Simulation results show the effectiveness of the method.
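
The abstract describes casting joint 3-D RISeUAV navigation and RIS phase-shift control as a sequential decision problem solved with deep reinforcement learning. The sketch below is only an illustration of how such a problem could be framed as a Markov decision process: the free-space LoS channel, the single BS-vehicle pair, all numerical parameters, and the class/function names are assumptions, not the paper's model, and the random rollout merely stands in for the trained DRL agent.

```python
# Illustrative sketch (assumed, not the paper's model): joint 3-D UAV navigation
# and RIS phase-shift control framed as an MDP with a simplified LoS channel.
import numpy as np

C = 3e8           # speed of light (m/s)
FREQ = 28e9       # illustrative mmWave carrier frequency (Hz)
WAVELEN = C / FREQ
N_ELEMENTS = 64   # number of RIS elements on the UAV (assumed)
TX_POW = 1.0      # transmit power (W), illustrative
NOISE_POW = 4e-15 # thermal noise over ~1 MHz, illustrative

def los_gain(d):
    """Free-space amplitude gain over distance d (simplified LoS model)."""
    return WAVELEN / (4 * np.pi * max(d, 1.0))

class RISeUAVEnv:
    """State: UAV 3-D position + current RIS phase shifts.
    Action: bounded 3-D velocity step + per-element phase increments."""
    def __init__(self):
        self.bs = np.array([0.0, 0.0, 25.0])        # base-station position (m)
        self.vehicle = np.array([120.0, 40.0, 0.0]) # vehicle position (m)
        self.reset()

    def reset(self):
        self.uav = np.array([60.0, 20.0, 50.0])     # initial RISeUAV position
        self.phases = np.zeros(N_ELEMENTS)          # RIS phase shifts (rad)
        return self._state()

    def _state(self):
        return np.concatenate([self.uav, self.phases])

    def _rate(self):
        d1 = np.linalg.norm(self.uav - self.bs)
        d2 = np.linalg.norm(self.uav - self.vehicle)
        # Cascaded BS -> RIS -> vehicle channel through N passive elements.
        h = los_gain(d1) * los_gain(d2) * np.sum(np.exp(1j * self.phases))
        snr = TX_POW * np.abs(h) ** 2 / NOISE_POW
        return np.log2(1.0 + snr)                   # achievable rate (bit/s/Hz)

    def step(self, action):
        move, dphi = action[:3], action[3:]
        self.uav = self.uav + np.clip(move, -5.0, 5.0)   # bounded 3-D step
        self.uav[2] = np.clip(self.uav[2], 30.0, 120.0)  # altitude limits
        self.phases = (self.phases + dphi) % (2 * np.pi)
        reward = self._rate()                            # reward = achievable rate
        return self._state(), reward

# Random rollout to demonstrate the interface; a trained DRL policy
# (e.g., an actor-critic for continuous actions) would map state -> action here.
env = RISeUAVEnv()
state = env.reset()
rng = np.random.default_rng(0)
for t in range(10):
    action = rng.uniform(-1.0, 1.0, size=3 + N_ELEMENTS)
    state, reward = env.step(action)
    print(f"step {t}: rate = {reward:.4f} bit/s/Hz")
```

The continuous joint action (movement plus phase increments) is what makes the problem suited to a DRL controller rather than exhaustive optimization at every time step, since the state-action space is far too large to search in real time.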
