4.6 Article

Dynamic Navigation in Unconstrained Environments Using Reinforcement Learning Algorithms

Related references

Note: Only a subset of the references is listed.
Article Automation & Control Systems

Model-Based Reinforcement Learning Control of Electrohydraulic Position Servo Systems

Zhikai Yao et al.

Summary: This article introduces a new model-based reinforcement learning controller to improve the control performance of hydraulic systems. By combining a recursive robust integral control approach with an actor-critic reinforcement learning structure, the proposed controller achieves high-accuracy tracking with system-level asymptotic stability guarantees.

IEEE-ASME TRANSACTIONS ON MECHATRONICS (2023)

Review Chemistry, Analytical

Bio-Inspired Optimization-Based Path Planning Algorithms in Unmanned Aerial Vehicles: A Survey

Sabitri Poudel et al.

Summary: Advancements in electronics and software have led to the rapid development of unmanned aerial vehicles (UAVs) and their applications. Path planning is a crucial aspect of UAV communications, and bio-inspired algorithms have emerged as a potential solution. However, there is currently no survey on the existing bio-inspired algorithms for UAV path planning. This study investigates and compares various bio-inspired algorithms extensively, highlighting their key features, working principles, advantages, and limitations. It also discusses the challenges and future trends in UAV path planning.

SENSORS (2023)

Article Computer Science, Theory & Methods

Path-Planning for Unmanned Aerial Vehicles with Environment Complexity Considerations: A Survey

Michael Jones et al.

Summary: Unmanned aerial vehicles (UAVs) have the potential to be used in various scenarios where relying on human labor is risky or costly. Fleets of autonomous UAVs that can collaborate and independently manage their flight and tasks will create new opportunities but also pose research and regulatory challenges. Improvements in UAV construction, computing hardware, communication mechanisms, and sensors make it technically possible to commercially deploy fleets of autonomous UAVs.

ACM COMPUTING SURVEYS (2023)

Article Computer Science, Artificial Intelligence

Autonomous target tracking of multi-UAV: A two-stage deep reinforcement learning approach with expert experience

Jiahua Wang et al.

Summary: This paper proposes a novel two-stage deep reinforcement learning method for multi-UAV decision-making. A sample generator that combines an artificial potential field with proportional-integral-derivative (PID) control produces expert experience data. In the two-stage training scheme, the policy and critic networks are first pre-trained on the expert data, and the high-quality experience subsequently generated by the agent itself is then used to guide the policy network, improving data efficiency. Extensive simulation experiments demonstrate that the method enables multiple UAVs to continuously track a target in obstacle-laden environments and significantly improves learning speed and convergence.

APPLIED SOFT COMPUTING (2023)
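The expert-experience generator described in the entry above lends itself to a brief illustration. The Python sketch below is not the authors' code: it assumes 2-D kinematics, hypothetical gain values, and a yaw-rate action. An artificial potential field supplies a desired heading, and a PID controller converts the heading error into the action that would be logged as expert data for pre-training.

import numpy as np

def apf_heading(pos, target, obstacles, k_att=1.0, k_rep=2.0, d0=5.0):
    force = k_att * (target - pos)                       # attractive term toward the target
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:                                       # repulsion only within the influence radius
            force += k_rep * (1.0 / d - 1.0 / d0) * (pos - obs) / d**3
    return np.arctan2(force[1], force[0])                # desired heading angle

class PID:
    def __init__(self, kp=2.0, ki=0.0, kd=0.5, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0
    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def expert_action(pos, heading, target, obstacles, pid):
    desired = apf_heading(pos, target, obstacles)
    err = np.arctan2(np.sin(desired - heading), np.cos(desired - heading))  # wrapped heading error
    return pid.step(err)                                 # yaw-rate command stored as an expert sample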

Article Computer Science, Information Systems

Multiagent Deep Reinforcement Learning With Demonstration Cloning for Target Localization

Ahmed Alagha et al.

Summary: This study proposes two novel multi-agent deep reinforcement learning models for target localization through search in complex environments. The first model combines proximal policy optimization, convolutional neural networks, convolutional autoencoders, and breadth-first search to obtain cooperative agents for fast and low-cost localization. The second model reduces computational complexity by replacing the shaped reward with a simple sparse reward and using expert demonstrations to guide the learning of new agents. The proposed models are tested on a radioactive target localization scenario and benchmarked against existing methods, showing efficacy in terms of localization time, cost, learning speed, and stability.

IEEE INTERNET OF THINGS JOURNAL (2023)
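As a rough illustration of the demonstration-cloning idea summarized above, the sketch below adds a behavior-cloning term to a standard PPO clipped objective. It is not the authors' implementation; the discrete action set and the loss weighting are assumptions.

import torch
import torch.nn.functional as F

def ppo_with_demo_cloning(ratio, advantage, policy_logits, demo_actions,
                          clip_eps=0.2, bc_weight=0.5):
    # ratio: pi_new/pi_old per sample; advantage: estimated advantages;
    # policy_logits: current policy logits on demo states;
    # demo_actions: LongTensor of expert action indices (hypothetical demo batch).
    surrogate = torch.min(ratio * advantage,
                          torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantage)
    ppo_loss = -surrogate.mean()                         # clipped PPO objective, negated as a loss
    bc_loss = F.cross_entropy(policy_logits, demo_actions)  # cloning term toward expert actions
    return ppo_loss + bc_weight * bc_loss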

Proceedings Paper Engineering, Aerospace

Path planning of autonomous UAVs using reinforcement learning

Christos Chronis et al.

Summary: Autonomous Beyond Visual Line of Sight (BVLOS) unmanned aerial vehicles (UAVs) equipped with high-performance obstacle avoidance and navigation algorithms can successfully navigate unknown environments. The findings demonstrate that a drone using only low-cost distance sensors can be navigated around obstacles.

12TH EASN INTERNATIONAL CONFERENCE ON INNOVATION IN AVIATION & SPACE FOR OPENING NEW HORIZONS (2023)

Article Engineering, Aerospace

UAV path planning and collision avoidance in 3D environments based on POMDP and improved grey wolf optimizer

Wei Jiang et al.

Summary: This study proposes a 3D path planning and collision avoidance algorithm for unmanned aerial vehicles (UAVs) based on a Partially Observable Markov Decision Process (POMDP) and an improved grey wolf optimizer (GWO). The algorithm plans a flyable path using the improved GWO with level comparison (GWOLC) and models aircraft collision avoidance as a POMDP. Simulation experiments demonstrate the effectiveness and robustness of the proposed algorithm.

AEROSPACE SCIENCE AND TECHNOLOGY (2022)
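For orientation, the NumPy sketch below shows the standard grey wolf optimizer position update; the paper's improved level-comparison variant (GWOLC) and its actual path-planning objective are not reproduced here, and the toy cost function is only an assumption.

import numpy as np

def gwo(cost, dim, n_wolves=20, iters=100, lb=-10.0, ub=10.0):
    X = np.random.uniform(lb, ub, (n_wolves, dim))        # candidate solutions (e.g. waypoint vectors)
    for t in range(iters):
        fitness = np.apply_along_axis(cost, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three best wolves lead the pack
        a = 2.0 - 2.0 * t / iters                         # exploration coefficient decays linearly
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2 * np.random.rand(dim) - 1)
                C = 2 * np.random.rand(dim)
                D = np.abs(C * leader - X[i])
                X_new += (leader - A * D) / 3.0           # average pull toward the three leaders
            X[i] = np.clip(X_new, lb, ub)
    return X[np.argmin(np.apply_along_axis(cost, 1, X))]

best = gwo(lambda x: np.sum(x**2), dim=6)                 # toy quadratic objective as a placeholder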

Review Engineering, Electrical & Electronic

Path Planning for Multiple Targets Interception by the Swarm of UAVs based on Swarm Intelligence Algorithms: A Review

Abhishek Sharma et al.

Summary: This paper focuses on the path planning problem for intercepting multiple aerial targets using a swarm of UAVs, with a comprehensive review of Swarm Intelligence algorithms applied in this context. The paper evaluates each algorithm by analyzing its merits and demerits, providing scholars and professionals in the field with an overview of current research in UAV swarm technology.

IETE TECHNICAL REVIEW (2022)

Article Remote Sensing

ANN estimation model for photogrammetry-based UAV flight planning optimisation

H. B. Makineci et al.

Summary: This study uses artificial neural networks to model and optimize the effect of input parameters on the outputs of UAV photogrammetry, and finds that atmospheric conditions significantly affect battery status and flight time. The normalization process plays a crucial role in the optimization.

INTERNATIONAL JOURNAL OF REMOTE SENSING (2022)

Article Engineering, Aerospace

A UAV Pursuit-Evasion Strategy Based on DDPG and Imitation Learning

Xiaowei Fu et al.

Summary: This paper proposes a UAV pursuit-evasion strategy based on DDPG and imitation learning. Imitation learning is used to improve the exploration strategy of DDPG, increasing exploration efficiency and avoiding wasted exploration. Simulation results show that the improved DDPG algorithm markedly improves training efficiency.

INTERNATIONAL JOURNAL OF AEROSPACE ENGINEERING (2022)
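The exploration mechanism summarized above can be pictured with a small sketch: actions are drawn from an expert (imitation) policy with a probability that decays over training, otherwise from the noisy DDPG actor. The policy names and the decay schedule are illustrative assumptions, not the authors' exact design.

import numpy as np

def select_action(state, actor, expert_policy, episode,
                  warmup_episodes=200, noise_std=0.1):
    p_expert = max(0.0, 1.0 - episode / warmup_episodes)   # decaying reliance on the expert
    if np.random.rand() < p_expert:
        return expert_policy(state)                        # guided exploration via imitation
    action = actor(state)                                  # learned deterministic policy
    return action + np.random.normal(0.0, noise_std, size=np.shape(action))  # exploration noise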

Article Engineering, Marine

Ship trajectory planning for collision avoidance using hybrid ARIMA-LSTM models

Misganaw Abebe et al.

Summary: This study proposes a hybrid ARIMA-LSTM model for predicting ship trajectories using AIS data, addressing the limitations of previous approaches in terms of accuracy and complexity. The results demonstrate that the proposed model can accurately estimate near-future trajectories and evaluate collision risks.

OCEAN ENGINEERING (2022)
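A minimal sketch of the hybrid structure described above, under stated assumptions: a synthetic latitude series stands in for AIS data, the ARIMA order is arbitrary, and a small PyTorch LSTM models the ARIMA residuals; the paper's features and model orders are not reproduced.

import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

lat = np.cumsum(np.random.randn(500) * 0.01) + 35.0        # synthetic latitude track (placeholder for AIS data)

arima = ARIMA(lat, order=(2, 1, 2)).fit()                   # linear component
residuals = arima.resid                                     # nonlinear part left for the LSTM

def windows(series, n=10):
    xs = np.stack([series[i:i + n] for i in range(len(series) - n)])
    ys = series[n:]
    return (torch.tensor(xs, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(ys, dtype=torch.float32))

X, y = windows(residuals)

class ResidualLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

model = ResidualLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                        # quick demo training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

# Hybrid one-step-ahead forecast: linear ARIMA forecast plus predicted residual.
next_lat = arima.forecast(1)[0] + model(X[-1:]).item()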

Proceedings Paper Automation & Control Systems

T-PRM: Temporal Probabilistic Roadmap for Path Planning in Dynamic Environments

Matthias Huppi et al.

Summary: In this work, a novel sampling-based path-planning algorithm called Temporal-PRM is proposed to avoid obstacles in dynamic environments. The algorithm extends the original Probabilistic Roadmap (PRM) with the notion of time and uses a time-aware variant of the A* search algorithm to efficiently query the path. Experimental results show that the proposed path planner outperforms other state-of-the-art sampling-based solvers and can run onboard a flying robot in real-time.

2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) (2022)
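To make the time-aware query concrete, here is a simplified Python sketch in the spirit of Temporal-PRM: nodes are expanded as (vertex, arrival-time) pairs, and an edge is traversed only if it is obstacle-free over the traversal interval. The graph representation, the edge_free() collision check, and the heuristic are assumptions, not the authors' implementation.

import heapq

def time_aware_astar(graph, start, goal, edge_free, heuristic, speed=1.0):
    # graph: dict vertex -> list of (neighbor, edge_length)
    open_set = [(heuristic(start, goal), 0.0, start, [start])]
    best_arrival = {}                                       # earliest known arrival time per vertex
    while open_set:
        f, t, v, path = heapq.heappop(open_set)
        if v == goal:
            return path, t
        if best_arrival.get(v, float("inf")) <= t:
            continue
        best_arrival[v] = t
        for nbr, length in graph[v]:
            t_arr = t + length / speed
            if edge_free(v, nbr, t, t_arr):                 # check moving obstacles over [t, t_arr]
                heapq.heappush(open_set,
                               (t_arr + heuristic(nbr, goal), t_arr, nbr, path + [nbr]))
    return None, float("inf")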

Article Computer Science, Information Systems

Hybrid Path Planning Model for Multiple Robots Considering Obstacle Avoidance

Tianrui Zhang et al.

Summary: This article studies hybrid path planning models for multi-robot path planning and cooperative formation control, addressing the insufficient task-execution capability of a single robot under complex conditions. By improving the particle swarm algorithm and the artificial potential field method, global and local path planning models are proposed to enhance the path exploration capability and handling efficiency of robot formations.

IEEE ACCESS (2022)

Article Computer Science, Artificial Intelligence

Deep reinforcement learning for drone navigation using sensor data

Victoria J. Hodge et al.

Summary: Mobile robots such as drones play a crucial role in surveillance, monitoring, and data collection in buildings, infrastructure, and outdoor environments. The research aims at accurate and rapid problem localization by means of flexible, autonomous mobile robots with powerful decision-making capabilities, implementing a generic adaptive navigation algorithm based on deep reinforcement learning to improve accuracy and efficiency.

NEURAL COMPUTING & APPLICATIONS (2021)

Article Computer Science, Artificial Intelligence

An enhanced genetic algorithm for path planning of autonomous UAV in target coverage problems

Y. Volkan Pehlivanoglu et al.

Summary: This paper addresses the path planning problem of an autonomous UAV in target coverage problems using artificial intelligence methods such as genetic algorithms, ant colony optimization, Voronoi diagrams, and clustering. The proposed enhancements to the GA accelerate convergence, while the integration of collision points with cluster centers provides the best result in avoiding crashes with terrain surfaces.

APPLIED SOFT COMPUTING (2021)

Article Engineering, Aerospace

Hybrid path planning using positioning risk and artificial potential fields

Yujin Shin et al.

Summary: Traditional path generation algorithms often assume accurate knowledge of user and obstacle positions, but in reality, positioning accuracy can vary due to different factors. The proposed method in this study utilizes a blend of potential and positioning risk fields to generate a hybrid directional flow for safe and efficient path planning for unmanned vehicles.

AEROSPACE SCIENCE AND TECHNOLOGY (2021)
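As an illustration of blending a potential field with a positioning-risk field, the sketch below derives a travel direction from the negative gradient of a weighted sum of attractive, repulsive, and risk terms; the Gaussian risk model and the weights are assumptions for illustration rather than the authors' formulation.

import numpy as np

def hybrid_direction(pos, goal, obstacles, risk_centers, risk_sigma=3.0,
                     w_att=1.0, w_rep=2.0, w_risk=1.5, d0=4.0):
    grad = w_att * (pos - goal)                                  # attractive potential gradient
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:                                               # repulsive gradient inside influence radius
            grad += w_rep * (1.0 / d0 - 1.0 / d) * (pos - obs) / d**3
    for c in risk_centers:                                       # positioning-risk gradient (Gaussian bump)
        diff = pos - c
        grad += -w_risk * diff / risk_sigma**2 * np.exp(-np.dot(diff, diff) / (2 * risk_sigma**2))
    step = -grad                                                 # descend the combined field
    return step / (np.linalg.norm(step) + 1e-9)                  # unit direction of travel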

Article Computer Science, Hardware & Architecture

Key technologies for safe and autonomous drones

Mahmoud Hussein et al.

Summary: This paper presents key technologies supporting the development of drone systems, emphasizing their impact on economy, environment, and human life risks. It also discusses the contributions of the COMP4DRONES project towards improving existing technologies.

MICROPROCESSORS AND MICROSYSTEMS (2021)

Article Robotics

How to train your robot with deep reinforcement learning: lessons we have learned

Julian Ibarz et al.

Summary: Deep reinforcement learning has shown promise in enabling physical robots to learn complex skills in the real world, which presents numerous challenges in perception and movement. Real-world robotics provides a unique domain for evaluating deep RL algorithms, addressing challenges that are often overlooked in mainstream RL research.

INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH (2021)

Article Chemistry, Analytical

Sensors and Measurements for UAV Safety: An Overview

Eulalia Balestrieri et al.

Summary: The paper presents a classification of UAV safety solutions found in the scientific literature, highlighting the fundamental role of sensors and measurements in this field. The proposed solutions cover a range of areas, including flight-test procedures, in-flight solutions, fault and damage detection, collision avoidance, safe landing, and ground solutions for testing and for quantifying injury and damage.

SENSORS (2021)

Review Computer Science, Artificial Intelligence

Applications and Research avenues for drone-based models in logistics: A classification and review

Mohammad Moshref-Javadi et al.

Summary: This work classifies and comprehensively reviews drone-based logistics models to identify future research directions. Existing studies focus mainly on e-commerce and healthcare, while other application areas require further exploration.

EXPERT SYSTEMS WITH APPLICATIONS (2021)

Article Computer Science, Artificial Intelligence

Adapted-RRT: novel hybrid method to solve three-dimensional path planning problem using sampling and metaheuristic-based algorithms

Farzad Kiani et al.

Summary: This study presents three novel versions of a hybrid method to assist autonomous robots in three-dimensional path planning. By improving the RRT algorithm and utilizing metaheuristic algorithms, these methods play an important role in selecting the next stations and generating optimal paths. Simulation results demonstrate the superior performance of these methods in UAV path planning compared to other algorithms.

NEURAL COMPUTING & APPLICATIONS (2021)

Proceedings Paper Automation & Control Systems

Autonomous Drone Racing with Deep Reinforcement Learning

Yunlong Song et al.

Summary: This research introduces a new approach to near-time-optimal trajectory generation for quadrotors using deep reinforcement learning and relative gate observations. The method is capable of computing near-time-optimal trajectories and adapting to changes in the environment, showing computational advantages over traditional trajectory optimization methods. The approach is evaluated on race tracks in simulation and with a physical quadrotor, achieving speeds of up to 60 km/h.

2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) (2021)

Article Engineering, Civil

Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV With Limited Environment Knowledge

Abhik Singla et al.

Summary: This paper presents a method for enabling a UAV quadrotor to autonomously avoid collisions with obstacles in unstructured and unknown indoor environments. The method, based on deep reinforcement learning, uses recurrent neural networks with temporal attention and outperforms prior works in terms of distance covered without collisions. Additionally, the technique reduces power wastage by minimizing oscillatory motion of the UAV.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2021)
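A compact PyTorch sketch of the recurrent-attention idea described above: a GRU encodes a short observation history and a temporal-attention layer weights its hidden states before Q-values are predicted. The input dimensions and the discrete action set are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class TemporalAttentionDQN(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, n_actions=5):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)            # one attention score per time step
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq):                      # obs_seq: (batch, T, feat_dim)
        h, _ = self.gru(obs_seq)                     # hidden states over the observation history
        w = torch.softmax(self.attn(h), dim=1)       # temporal attention weights
        ctx = (w * h).sum(dim=1)                     # attention-weighted context vector
        return self.q_head(ctx)                      # Q-value per discrete action

q = TemporalAttentionDQN()(torch.randn(2, 8, 64))    # -> tensor of shape (2, 5)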

Article Computer Science, Artificial Intelligence

Inverse Learning for Data-Driven Calibration of Model-Based Statistical Path Planning

Marcel Menner et al.

Summary: This paper introduces a method for inverse learning of a control objective defined in terms of requirements and their joint probability distribution. Parametrized requirements and methods for estimating their parameters are also proposed. Results suggest that the proposed model and learning method enable a more natural and personalized driving style for autonomous vehicles.

IEEE TRANSACTIONS ON INTELLIGENT VEHICLES (2021)

Review Automation & Control Systems

Mixed-integer programming in motion planning

Daniel Ioan et al.

Summary: This paper reviews the past and current results and approaches in motion planning using Mixed-integer Programming (MIP). It highlights the efficiency of MIP in selecting from a limited number of alternatives or solving optimization problems over non-convex domains, as well as the importance of various experimental validations in the literature.

ANNUAL REVIEWS IN CONTROL (2021)

Article Computer Science, Information Systems

Path Planning and Obstacle Avoiding of the USV Based on Improved ACO-APF Hybrid Algorithm With Adaptive Early-Warning

Yanli Chen et al.

Summary: This study proposes an improved ACO-APF algorithm for path planning of unmanned surface vehicles (USVs), combining ant colony optimization and artificial potential field methods to enhance efficiency and safety in dynamic environments.

IEEE ACCESS (2021)

Article Automation & Control Systems

Self-Configuring Robot Path Planning With Obstacle Avoidance via Deep Reinforcement Learning

Bianca Sangiovanni et al.

Summary: This letter proposes a hybrid control methodology that uses deep reinforcement learning to train robot manipulators to avoid obstacles while performing tasks. A switching mechanism is activated when the manipulator is close to obstacles, allowing automatic adjustment to cope with unexpected objects in the workspace. The proposal was tested on a realistic robot manipulator simulated in the V-REP environment.

IEEE CONTROL SYSTEMS LETTERS (2021)

Article Computer Science, Artificial Intelligence

Towards Real-Time Path Planning through Deep Reinforcement Learning for a UAV in Dynamic Environments

Chao Yan et al.

JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS (2020)

Review Computer Science, Information Systems

Path planning techniques for unmanned aerial vehicles: A review, solutions, and challenges

Shubhani Aggarwal et al.

COMPUTER COMMUNICATIONS (2020)

Article Computer Science, Artificial Intelligence

A novel hybrid grey wolf optimizer algorithm for unmanned aerial vehicle (UAV) path planning

Chengzhi Qu et al.

KNOWLEDGE-BASED SYSTEMS (2020)

Proceedings Paper Computer Science, Artificial Intelligence

UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement Learning Approach

Harald Bayerlein et al.

2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM) (2020)

Proceedings Paper Automation & Control Systems

An Improved RRT* Path Planning Algorithm for Service Robot

Wei Wang et al.

PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020) (2020)

Article Computer Science, Information Systems

Trajectory Planning for UAV Based on Improved ACO Algorithm

Bo Li et al.

IEEE ACCESS (2020)

Article Engineering, Electrical & Electronic

Interference Management for Cellular-Connected UAVs: A Deep Reinforcement Learning Approach

Ursula Challita et al.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS (2019)

Article Computer Science, Artificial Intelligence

Three dimensional path planning using Grey wolf optimizer for UAVs

Ram Kishan Dewangan et al.

APPLIED INTELLIGENCE (2019)

Article Computer Science, Information Systems

The Obstacle Detection Method of UAV Based on 2D Lidar

Lanxiang Zheng et al.

IEEE ACCESS (2019)

Article Computer Science, Information Systems

Reconnaissance Mission Conducted by UAV Swarms Based on Distributed PSO Path Planning Algorithms

Yubing Wang et al.

IEEE ACCESS (2019)

Proceedings Paper Computer Science, Information Systems

A Brief Review of the Intelligent Algorithm for Traveling Salesman Problem in UAV Route Planning

Yunpeng Xu et al.

PROCEEDINGS OF 2019 IEEE 9TH INTERNATIONAL CONFERENCE ON ELECTRONICS INFORMATION AND EMERGENCY COMMUNICATION (ICEIEC 2019) (2019)

Article Computer Science, Information Systems

Path Planning of UAVs Based on Collision Probability and Kalman Filter

Zhenyu Wu et al.

IEEE ACCESS (2018)

Article Robotics

Last-Centimeter Personal Drone Delivery: Field Deployment and User Interaction

Przemyslaw Mariusz Kornatowski et al.

IEEE ROBOTICS AND AUTOMATION LETTERS (2018)

Article Robotics

Surveillance Planning With Bezier Curves

Jan Faigl et al.

IEEE ROBOTICS AND AUTOMATION LETTERS (2018)

Review Engineering, Aerospace

Classifications, applications, and design challenges of drones: A review

M. Hassanalian et al.

PROGRESS IN AEROSPACE SCIENCES (2017)

Article Engineering, Aerospace

Dynamic optimal UAV trajectory planning in the National Airspace System via mixed integer linear programming

Mohammadreza Radmanesh et al.

PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART G-JOURNAL OF AEROSPACE ENGINEERING (2016)

Article Engineering, Aerospace

Cooperative path planning with applications to target tracking and obstacle avoidance for multi-UAVs

Peng Yao et al.

AEROSPACE SCIENCE AND TECHNOLOGY (2016)

Article Multidisciplinary Sciences

Human-level control through deep reinforcement learning

Volodymyr Mnih et al.

NATURE (2015)

Article Automation & Control Systems

3-D Model-Based Tracking for UAV Indoor Localization

Celine Teuliere et al.

IEEE TRANSACTIONS ON CYBERNETICS (2015)

Article Engineering, Aerospace

UAV Path Planning in a Dynamic Environment via Partially Observable Markov Decision Process

Shankarachary Ragi et al.

IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS (2013)

Article Engineering, Aerospace

Path Planning of Unmanned Aerial Vehicles using B-Splines and Particle Swarm Optimization

Jung Leng Foo et al.

JOURNAL OF AEROSPACE COMPUTING INFORMATION AND COMMUNICATION (2009)

Article Automation & Control Systems

Roadmap-based path planning - Using the Voronoi diagram for a clearance-based shortest path

Priyadarshi Bhattacharya et al.

IEEE ROBOTICS & AUTOMATION MAGAZINE (2008)