4.7 Article

Personalized Car-Following Control Based on a Hybrid of Reinforcement Learning and Supervised Learning

Related References

Note: only a partial list of references is shown here; the original article contains the complete reference information.
Article Computer Science, Artificial Intelligence

Efficient Deep Reinforcement Learning With Imitative Expert Priors for Autonomous Driving

Zhiyu Huang et al.

Summary: This article presents a novel framework that incorporates human prior knowledge into deep reinforcement learning to improve sample efficiency and simplify reward function design. The proposed method achieves superior performance and significantly enhances sample efficiency in autonomous driving applications. The results demonstrate that using ensemble methods to estimate uncertainties and increasing the training sample size can improve training and testing performance, particularly for more challenging tasks.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2023)

Article Computer Science, Information Systems

Supervised pre-training for improved stability in deep reinforcement learning

Sooyoung Jang et al.

Summary: With the recent advancements in deep learning, deep reinforcement learning (DRL) technology has been extensively studied, leading to improved performance and expanded applications. However, DRL performance is highly sensitive to design choices, such as neural network initialization, making stable performance and reproducibility difficult to achieve. To address this, we propose a supervised pre-training method for both the policy and value networks, focusing on maximizing initial entropy and biasing the distribution to a specific value. Our experiments on tasks with discrete action spaces demonstrate the effectiveness of the proposed method in improving stability and performance.

ICT EXPRESS (2023)
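
The pre-training idea in the entry above can be illustrated with a small, hypothetical sketch: a discrete policy head is regressed toward the uniform distribution so that it enters RL training with near-maximal entropy. All shapes, the learning rate, and the linear policy head below are invented for illustration and are not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a linear policy head over 4 discrete actions.
n_features, n_actions = 8, 4
W = rng.normal(scale=1.0, size=(n_features, n_actions))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Supervised pre-training: regress the logits toward the uniform target
# so the initial policy has (near-)maximal entropy before RL starts.
X = rng.normal(size=(256, n_features))
target = np.full(n_actions, 1.0 / n_actions)  # uniform soft labels
lr = 0.5
for _ in range(200):
    p = softmax(X @ W)
    grad = X.T @ (p - target) / len(X)  # cross-entropy gradient w.r.t. W
    W -= lr * grad

mean_H = entropy(softmax(X @ W)).mean()  # approaches ln(4) ~ 1.386
```

The same cross-entropy machinery could bias a value head toward a chosen constant by swapping the uniform target for that value.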

Article Engineering, Civil

Dynamic Driving Risk Potential Field Model Under the Connected and Automated Vehicles Environment and Its Application in Car-Following Modeling

Linheng Li et al.

Summary: This paper proposes a new dynamic driving risk potential field model that considers the dynamic effect of a vehicle's acceleration and steering angle in the connected and automated vehicles (CAVs) environment. The simulation results show that the model accurately describes car-following behavior and outperforms other classical models in frequent oscillation phases. Additionally, the model is successfully used to deduce safety conditions for vehicle lane-changing.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Article Engineering, Civil

Deep Reinforcement Learning for Intelligent Transportation Systems: A Survey

Ammar Haydari et al.

Summary: Recent technological advances have improved the quality of transportation, and the emergence of data-driven approaches has opened new research directions for control-based systems in domains including transportation, robotics, IoT, and power systems. This paper presents a survey of traffic control applications based on deep reinforcement learning (RL). It extensively discusses problem formulations, RL parameters, and simulation environments for traffic signal control (TSC) applications. The survey also covers autonomous driving applications studied with deep RL models, categorizing them by application type, control model, and algorithm, and concludes with a discussion of challenges and open questions in deep-RL-based transportation applications.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Article Computer Science, Artificial Intelligence

Supervised assisted deep reinforcement learning for emergency voltage control of power systems

Xiaoshuang Li et al.

Summary: This paper proposes a novel hybrid emergency voltage control method that combines expert experience and machine intelligence. The expert experience is extracted through a behavioral cloning model, and the deep reinforcement learning method is applied to discover and learn new knowledge autonomously. Experiments validate the effectiveness and applicability of the proposed method.

NEUROCOMPUTING (2022)

Article Computer Science, Information Systems

Automatic Weight Determination in Model Predictive Control for Personalized Car-Following Control

Wonteak Lim et al.

Summary: Car-following control is a fundamental application of autonomous driving, and Model Predictive Control (MPC) is a powerful method for it. However, determining the optimal weight factors for MPC is not straightforward. To solve this, we propose an automatic tuning method based on personal driving data, which reduces the effort and time required from engineers.

IEEE ACCESS (2022)

Article Engineering, Civil

Platoon Trajectories Generation: A Unidirectional Interconnected LSTM-Based Car-Following Model

Yangxin Lin et al.

Summary: Car-following models have been widely applied and have achieved remarkable success in traffic engineering. However, the accuracy of traffic micro-simulation at the platoon level, especially during traffic oscillations, needs improvement. This study proposes a new trajectory generation approach that produces platoon-level trajectories from the first leading vehicle's trajectory. Analysis shows that the error stems from both the training method and the model structure. Two improvements to the traditional LSTM-based car-following model significantly reduce error in temporal-spatial propagation; compared to the traditional model, the proposed model achieves a 40% lower error rate.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Article Engineering, Civil

Deep Reinforcement Learning for Autonomous Driving: A Survey

B. Ravi Kiran et al.

Summary: This paper summarizes deep reinforcement learning algorithms, provides a taxonomy of automated driving tasks, discusses key computational challenges in real world deployment of autonomous driving agents, and explores adjacent domains as well as the role of simulators in training agents.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Article Engineering, Civil

A Spatiotemporal Bidirectional Attention-Based Ride-Hailing Demand Prediction Model: A Case Study in Beijing During COVID-19

Ziheng Huang et al.

Summary: This study introduces MOS-BiAtten, a deep learning model for predicting urban ride-hailing demand that combines a multi-head spatial attention mechanism with a bidirectional attention mechanism. Experimental results demonstrate its superior performance on a Beijing dataset collected during COVID-19.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Proceedings Paper Automation & Control Systems

Improved Deep Reinforcement Learning with Expert Demonstrations for Urban Autonomous Driving

Haochen Liu et al.

Summary: This paper proposes a novel learning-based approach that combines deep reinforcement learning and imitation learning for vehicle motion control in autonomous driving scenarios. The method utilizes a soft actor-critic structure and modifies the learning process to achieve a balance between maximizing rewards and imitating expert demonstrations, resulting in improved performance and efficiency.

2022 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV) (2022)
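
The balance the entry above describes — maximizing rewards while imitating expert demonstrations — can be roughly illustrated as an actor objective mixing a SAC-style entropy-regularized term with a behavioral-cloning (BC) cross-entropy term. The function names, weights, and shapes below are invented for illustration and are not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combined_loss(logits, q_values, expert_actions, alpha=0.2, lam=1.0):
    """Toy actor objective mixing an RL term with an imitation term.

    SAC-style term (minimized): E_pi[alpha * log pi - Q]
    BC term: cross-entropy to the expert's discrete actions
    lam trades off reward maximization against imitation.
    """
    pi = softmax(logits)
    log_pi = np.log(pi + 1e-12)
    sac_term = (pi * (alpha * log_pi - q_values)).sum(axis=-1).mean()
    bc_term = -log_pi[np.arange(len(expert_actions)), expert_actions].mean()
    return sac_term + lam * bc_term
```

Logits that already agree with the expert should score a lower combined loss than indifferent ones, which is what drives the policy toward expert-like behavior early in training.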

Article Engineering, Electrical & Electronic

Deep Inverse Reinforcement Learning for Behavior Prediction in Autonomous Driving: Accurate Forecasts of Vehicle Motion

Tharindu Fernando et al.

Summary: The article highlights the importance of accurate behavior modeling in autonomous driving, analyzes the potential and progress of deep inverse reinforcement learning (D-IRL) in this field, and provides quantitative and qualitative evaluations to support its observations. Despite recent successes in D-IRL, its application to modeling behavior in autonomous driving remains largely unexplored.

IEEE SIGNAL PROCESSING MAGAZINE (2021)

Article Engineering, Civil

Learning From Naturalistic Driving Data for Human-Like Autonomous Highway Driving

Donghao Xu et al.

Summary: The study proposes a method for learning the cost parameters of a motion planner from naturalistic driving data to achieve human-like driving behavior in autonomous vehicles. The motion planner incorporates the behavioral incentives of a human driver, and promising results are achieved in experiments on both lane-change decision making and motion planning.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2021)

Article Engineering, Civil

Combined Hierarchical Learning Framework for Personalized Automatic Lane-Changing

Bing Zhu et al.

Summary: There have been significant advances in automated driving technology, which call for personalized designs to reduce conflicts between drivers and vehicles. This paper proposes a combined hierarchical learning framework that utilizes both data-based and mechanism-based methods to achieve reliable and stable automated driving.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2021)

Article Engineering, Civil

Self-Learning Optimal Cruise Control Based on Individual Car-Following Style

Hongqing Chu et al.

Summary: This study developed an optimal cruise controller that automatically adapts to individual car-following styles. By using a learning algorithm to quantify closeness to predefined styles, the controller was able to determine and adapt to a proper style for specific drivers. Simulation and experimental tests showed that the controller's behavior was closer to that of human drivers than factory-installed ACC systems.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2021)

Article Engineering, Civil

A Survey of Deep Learning Applications to Autonomous Vehicle Control

Sampo Kuutti et al.

Summary: Deep learning methods have shown great promise in providing excellent performance for complex and non-linear control problems, as well as generalising previously learned rules to new scenarios. While there have been important advancements in using deep learning for vehicle control, there are still challenges to overcome, such as computation, architecture selection, goal specification, generalisation, verification and validation, as well as safety.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2021)

Article Engineering, Civil

Human-Like Decision Making for Autonomous Driving: A Noncooperative Game Theoretic Approach

Peng Hang et al.

Summary: This paper presents a human-like decision making framework for AVs considering the coexistence of human-driven vehicles and autonomous vehicles in the future. Different driving styles, social interaction characteristics, game theory, and model predictive control are applied for decision making in AVs. Testing scenarios of lane change show that game theoretic approaches can provide reasonable human-like decision making, with the Stackelberg game theory approach reducing the cost value by over 20% under normal driving style compared to the Nash equilibrium approach.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2021)

Article Transportation Science & Technology

About calibration of car-following dynamics of automated and human-driven vehicles: Methodology, guidelines and codes

Vincenzo Punzo et al.

Summary: The study proposes a methodology based on Pareto-efficiency and indifference curves to compare and rank objective functions for car-following dynamics. Consistent results from calibration experiments have led to the recommendation of a robust guideline for car-following calibration, including suggestions on what settings to avoid and which ones to adopt. Sharing of codes and data from the study aims to promote transparent and reproducible research.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2021)

Article Computer Science, Information Systems

Pre-training with asynchronous supervised learning for reinforcement learning based autonomous driving

Yunpeng Wang et al.

Summary: Reinforcement learning performs well in designing autonomous driving systems, but faces the challenge of poor initial performance in practical implementation.

FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING (2021)

Article Computer Science, Artificial Intelligence

Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive Cruise Control

Yuan Lin et al.

Summary: This study compared the performance of Deep Reinforcement Learning (DRL) and Model Predictive Control (MPC) in Adaptive Cruise Control design, finding that the two are comparable when testing data falls within the training range, but DRL performance degrades when the testing data is outside the training range.

IEEE TRANSACTIONS ON INTELLIGENT VEHICLES (2021)

Article Engineering, Electrical & Electronic

Deterministic Promotion Reinforcement Learning Applied to Longitudinal Velocity Control for Automated Vehicles

Yuxiang Zhang et al.

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY (2020)

Article Transportation Science & Technology

Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving

Meixin Zhu et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2020)

Article Engineering, Civil

Introducing Electrified Vehicle Dynamics in Traffic Simulation

Yinglong He et al.

TRANSPORTATION RESEARCH RECORD (2020)

Article Engineering, Electrical & Electronic

Extracting Human-Like Driving Behaviors From Expert Driver Data Using Deep Learning

Kyle Sama et al.

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY (2020)

Article Engineering, Civil

Learning the Car-following Behavior of Drivers Using Maximum Entropy Deep Inverse Reinforcement Learning

Yang Zhou et al.

JOURNAL OF ADVANCED TRANSPORTATION (2020)

Article Engineering, Electrical & Electronic

Personalized Adaptive Cruise Control Based on Online Driving Style Recognition Technology and Model Predictive Control

Bingzhao Gao et al.

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY (2020)

Article Transportation Science & Technology

A sequence to sequence learning based car-following model for multi-step predictions considering reaction delay

Lijing Ma et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2020)

Article Computer Science, Artificial Intelligence

Brain-Inspired Cognitive Model With Attention for Self-Driving Cars

Shitao Chen et al.

IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS (2019)

Article Engineering, Civil

A Novel Car-Following Control Model Combining Machine Learning and Kinematics Models for Automated Vehicles

Da Yang et al.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2019)

Article Physics, Multidisciplinary

Long memory is important: A test study on deep-learning based car-following model

Xiao Wang et al.

PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS (2019)

Article Transportation Science & Technology

Typical-driving-style-oriented Personalized Adaptive Cruise Control design based on human driving data

Bing Zhu et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2019)

Article Engineering, Civil

MFC Free-Flow Model: Introducing Vehicle Dynamics in Microsimulation

Michail Makridis et al.

TRANSPORTATION RESEARCH RECORD (2019)

Article Computer Science, Information Systems

Fusion Modeling Method of Car-Following Characteristics

Yufang Li et al.

IEEE ACCESS (2019)

Article Engineering, Civil

Capturing Car-Following Behaviors by Deep Learning

Xiao Wang et al.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2018)

Article Transportation Science & Technology

Modeling car-following behavior on urban expressways in Shanghai: A naturalistic driving study

Meixin Zhu et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2018)

Article Engineering, Multidisciplinary

A Hardware Platform Framework for an Intelligent Vehicle Based on a Driving Brain

Deyi Li et al.

ENGINEERING (2018)

Article Transportation Science & Technology

A car-following model considering asymmetric driving behavior based on long short-term memory neural networks

Xiuling Huang et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2018)

Article Computer Science, Artificial Intelligence

Driver Behavior Characteristics Identification Strategies Based on Bionic Intelligent Algorithms

Bing Zhu et al.

IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS (2018)

Article Transportation Science & Technology

Human-like autonomous car-following model with deep reinforcement learning

Meixin Zhu et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2018)

Article Engineering, Civil

Analysis of Recurrent Neural Networks for Probabilistic Modeling of Driver Behavior

Jeremy Morton et al.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2017)

Article Environmental Studies

Validation of the Rakha-Pasumarthy-Adjerid car-following model for vehicle fuel consumption and emission estimation applications

Jinghui Wang et al.

TRANSPORTATION RESEARCH PART D-TRANSPORT AND ENVIRONMENT (2017)

Article Transportation Science & Technology

A binary decision model for discretionary lane changing move based on fuzzy inference system

Esmaeil Balal et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2016)

Article Computer Science, Interdisciplinary Applications

The new car following model considering vehicle dynamics influence and numerical simulation

Dihua Sun et al.

INTERNATIONAL JOURNAL OF MODERN PHYSICS C (2015)

Article Engineering, Civil

Vehicle Dynamics Model for Estimating Typical Vehicle Accelerations

Karim Fadhloun et al.

TRANSPORTATION RESEARCH RECORD (2015)

Article Transportation Science & Technology

Incorporating human-factors in car-following models: A review of recent developments and research needs

Mohammad Saifuzzaman et al.

TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES (2014)

Article Engineering, Civil

An Adaptive Longitudinal Driving Assistance System Based on Driver Characteristics

Jianqiang Wang et al.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2013)

Article Computer Science, Artificial Intelligence

A supervised Actor-Critic approach for adaptive cruise control

Dongbin Zhao et al.

SOFT COMPUTING (2013)

Article Engineering, Civil

Cooperative Adaptive Cruise Control: A Reinforcement Learning Approach

Charles Desjardins et al.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2011)

Article Physics, Fluids & Plasmas

Congested traffic states in empirical observations and microscopic simulations

M Treiber et al.

PHYSICAL REVIEW E (2000)
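
The Treiber et al. entry above introduced the Intelligent Driver Model (IDM), the classical car-following law against which many of the learning-based models in this list are compared. A minimal sketch follows; the parameter values are common illustrative defaults, not necessarily the paper's calibrated ones.

```python
import math

def idm_accel(v, v_lead, gap,
              v0=33.3,    # desired speed [m/s]
              T=1.6,      # desired time headway [s]
              a_max=0.73, # maximum acceleration [m/s^2]
              b=1.67,     # comfortable deceleration [m/s^2]
              s0=2.0,     # minimum standstill gap [m]
              delta=4):
    """IDM acceleration of the follower given its speed, the leader's
    speed, and the bumper-to-bumper gap."""
    dv = v - v_lead  # approach rate (positive when closing in)
    # Desired dynamic gap: standstill gap + headway term + braking term.
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / max(gap, 1e-6)) ** 2)
```

On a free road the model accelerates toward the desired speed `v0`; when closing in on a slower leader the interaction term dominates and produces braking.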