4.7 Article

Vessel-following model for inland waterways based on deep reinforcement learning

Related references

Note: Only a portion of the references is listed.
Article Transportation Science & Technology

Modified DDPG car-following model with a real-world human driving experience with CARLA simulator

Dianzhao Li et al.

Summary: This study proposes a two-stage deep reinforcement learning (DRL) method to train a car-following agent for autonomous driving. By leveraging real-world human driving experience, the learned policy is modified to outperform pure DRL agents. Evaluation across various driving scenarios shows that the agent drives more efficiently and more reasonably after extracting good behavior from human drivers, making it better suited to traffic involving human-robot interaction.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2023)

Article Multidisciplinary Sciences

Outracing champion Gran Turismo drivers with deep reinforcement learning

Peter R. Wurman et al.

Summary: This study describes how agents for Gran Turismo were trained to compete with the world's best e-sports drivers. By combining deep reinforcement learning with mixed-scenario training, the agents learned an integrated control policy that pairs exceptional speed with impressive tactics. They went on to win a head-to-head competition against four of the world's best Gran Turismo drivers, showcasing the possibilities and challenges of applying these techniques to complex dynamical systems in domains where agents must respect imprecisely defined human norms.

NATURE (2022)

Article Engineering, Marine

Study of narrow waterways congestion based on automatic identification system (AIS) data: A case study of Houston Ship Channel

Masood Jafari Kang et al.

Summary: This article extends the definition of congestion indices to maritime transportation systems and proposes a methodology for measuring these indices from automatic identification system (AIS) data. The results show that the dwell time index (DTI) better quantifies waterway congestion and highlights its severity across channel segments and vessel types.

JOURNAL OF OCEAN ENGINEERING AND SCIENCE (2022)

Article Chemistry, Analytical

A Novel Reinforcement Learning Collision Avoidance Algorithm for USVs Based on Maneuvering Characteristics and COLREGs

Yunsheng Fan et al.

Summary: This paper investigates collision avoidance for unmanned surface vehicles (USVs) under the constraints of the international collision regulations (COLREGs). A reinforcement learning collision avoidance algorithm that accounts for USV maneuvering characteristics is proposed. Testing in a marine simulation environment shows that the algorithm attains a higher average reward.

SENSORS (2022)

Article Engineering, Civil

An Optimized Path Planning Method for Coastal Ships Based on Improved DDPG and DP

Yiquan Du et al.

Summary: This paper proposes an improved deep reinforcement learning path planning method that achieves safe and economical paths for coastal ships by refining the reward function and optimizing the algorithm.

JOURNAL OF ADVANCED TRANSPORTATION (2021)

Article Economics

Managing ship lock congestion in an inland waterway: A bottleneck model with a service time window

Yao Deng et al.

Summary: This paper introduces a bottleneck model for managing ship lock congestion and explores different congestion tolling and administrative schemes. It finds that MST can effectively substitute for tolling schemes in most cases, even outperforming them under certain conditions. Combining MST with tolling can further increase efficiency, although caution is needed in cases where the benefits of MST are marginal or zero.

TRANSPORT POLICY (2021)

Article Transportation Science & Technology

Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving

Meixin Zhu et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2020)

Article Multidisciplinary Sciences

Autonomous navigation of stratospheric balloons using reinforcement learning

Marc G. Bellemare et al.

NATURE (2020)

Article Transportation Science & Technology

Waterborne platooning in the short sea shipping sector

A. Colling et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2020)

Article Engineering, Ocean

Automatic collision avoidance of multiple ships based on deep Q-learning

Haiqing Shen et al.

APPLIED OCEAN RESEARCH (2019)

Article Engineering, Marine

COLREGs-compliant multiship collision avoidance based on deep reinforcement learning

Luman Zhao et al.

OCEAN ENGINEERING (2019)

Article Engineering, Marine

Vessel traffic scheduling method for the controlled waterways in the upper Yangtze River

Shan Liang et al.

OCEAN ENGINEERING (2019)

Article Transportation Science & Technology

Distributed model predictive control for vessel train formations of cooperative multi-vessel systems

Linying Chen et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2018)

Article Transportation Science & Technology

Dissipation of stop-and-go waves via control of autonomous vehicles: Field experiments

Raphael E. Stern et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2018)

Article Environmental Studies

The waterway ship scheduling problem

Eduardo Lalla-Ruiz et al.

TRANSPORTATION RESEARCH PART D-TRANSPORT AND ENVIRONMENT (2018)

Article Transportation Science & Technology

Human-like autonomous car-following model with deep reinforcement learning

Meixin Zhu et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2018)

Article Multidisciplinary Sciences

Human-level control through deep reinforcement learning

Volodymyr Mnih et al.

NATURE (2015)

Article Physics, Multidisciplinary

Comparing numerical integration schemes for time-continuous car-following models

Martin Treiber et al.

PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS (2015)

Article Operations Research & Management Science

Waiting time approximation in single-class queueing systems with multiple types of interruptions: modeling congestion at waterways entrances

Oezgecan S. Uluscu et al.

ANNALS OF OPERATIONS RESEARCH (2009)