Article

Deep-Reinforcement-Learning-Based Collision Avoidance in UAV Environment

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 9, Issue 6, Pages 4015-4030

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2021.3118949

Keywords

Sensors; Unmanned aerial vehicles; Collision avoidance; Reinforcement learning; Vehicular ad hoc networks; Regulation; Industries; deep reinforcement learning; machine learning; multiaccess-edge computing (MEC); unmanned aerial vehicles (UAVs)

Funding

  1. Spanish National Project [PID2019-108713RB-C53]
  2. European Union [857031]


Unmanned aerial vehicles (UAVs) have recently attracted attention from both academia and industry due to their use in numerous emerging applications. Most UAV applications operate within visual line of sight (VLOS) because of current regulations. There is a consensus within industry on extending UAVs' commercial operations to cover urban and populated controlled airspace beyond VLOS (BVLOS), and regulation enabling BVLOS UAV management is ongoing. However, this comes with unavoidable challenges related to UAV autonomy in detecting and avoiding static and mobile objects. An intelligent component should be deployed either onboard the UAV or at a multiaccess-edge computing (MEC) host that can read the data gathered from the UAV's various sensors, process it, and then make the right decision to detect and avoid physical collisions. The sensing data can be collected using various sensors, including but not limited to lidar, depth cameras, video, or ultrasonic sensors. This article proposes probabilistic and deep-reinforcement-learning (DRL)-based algorithms for avoiding collisions while reducing energy consumption. The proposed algorithms can run either onboard the UAV or at the MEC, depending on the UAV's capacity and the task overhead. We have designed and developed our algorithms to work in any environment without requiring prior knowledge of it. The proposed solutions have been evaluated in a harsh environment consisting of many UAVs moving randomly, and without any correlation, in a small area. The obtained results demonstrate the efficiency of these solutions in avoiding collisions while reducing energy consumption in both familiar and unfamiliar environments.
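The abstract describes a reinforcement-learning agent that trades collision risk against energy spent on avoidance maneuvers. The paper's actual DRL algorithms and environment are not reproduced here; the following is a minimal tabular Q-learning sketch of that tradeoff in a toy corridor world. All state encodings, action names, and reward values below are illustrative assumptions, not the authors' design.

```python
import random

# Toy environment (illustrative only): the state is the discretized distance
# to the nearest obstacle ahead (1, 2, or the "3 or more" bucket CLEAR).
# Action 0 = fly straight (energy cost 1); action 1 = detour (energy cost 2,
# which clears the path ahead). Colliding incurs a large penalty.
ACTIONS = (0, 1)
CLEAR = 3

def step(dist, action):
    """Return (next_dist, reward, done) for one decision step."""
    if action == 1:                  # detour: more energy, path cleared
        return CLEAR, -2.0, False
    if dist == 1:                    # flying straight into the obstacle
        return dist, -100.0, True
    return dist - 1, -1.0, False     # straight: cheap, but obstacle closes in

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {d: [0.0, 0.0] for d in (1, 2, CLEAR)}
    for _ in range(episodes):
        dist = rng.choice((1, 2, CLEAR))
        for _ in range(30):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)           # explore
            else:
                a = max(ACTIONS, key=lambda x: q[dist][x])  # exploit
            nxt, r, done = step(dist, a)
            target = r if done else r + gamma * max(q[nxt])
            q[dist][a] += alpha * (target - q[dist][a])
            if done:
                break
            dist = nxt
    return q

q = train()
# The learned policy detours only when the obstacle is one cell away,
# and otherwise flies straight to save energy.
policy = {d: max(ACTIONS, key=lambda a: q[d][a]) for d in q}
```

The same structure carries over to the DRL setting described in the abstract: the Q-table becomes a neural network fed by sensor observations (lidar, depth camera, etc.), and, as the authors note, the learned policy can be evaluated either onboard the UAV or offloaded to the MEC.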

