4.7 Article

An Adaptive Hierarchical Energy Management Strategy for Hybrid Electric Vehicles Combining Heuristic Domain Knowledge and Data-Driven Deep Reinforcement Learning

Related references

Note: Only a subset of the references is listed.
Article Thermodynamics

Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle

Chunyang Qi et al.

Summary: This research introduces a novel deep Q-learning algorithm for the energy management strategy of HEVs. The proposed method not only addresses the issue of sparse rewards during training, but also achieves optimal power distribution. In addition, the hierarchical structure of the algorithm enhances exploration of the vehicle environment, leading to improved training efficiency and reduced fuel consumption. A minimal sketch of such a two-level update follows this entry.

ENERGY (2022)
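
The hierarchical structure described in the summary can be made concrete with a small, hypothetical sketch: a high-level Q-network selects an operating mode and a low-level Q-network selects a discrete power split given that mode. The state dimension, action discretization, network sizes, and reward handling below are assumptions for illustration, not the paper's actual design.

    # Hypothetical two-level deep Q-learning update for an HEV energy-management agent.
    import torch
    import torch.nn as nn

    class QNet(nn.Module):
        """Small MLP mapping a state vector to Q-values over discrete actions."""
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, s):
            return self.net(s)

    # High level: choose an operating mode (e.g. electric-only vs. hybrid).
    # Low level: choose a discrete engine-power fraction given that mode.
    state_dim, n_modes, n_splits = 4, 2, 11
    q_high = QNet(state_dim, n_modes)
    q_low = QNet(state_dim + 1, n_splits)        # state plus the chosen mode
    optimizer = torch.optim.Adam(
        list(q_high.parameters()) + list(q_low.parameters()), lr=1e-3)
    gamma = 0.99

    def td_update(s, mode, split, r, s_next, done):
        """One temporal-difference step for both levels on a single transition."""
        s = torch.as_tensor(s, dtype=torch.float32)
        s_next = torch.as_tensor(s_next, dtype=torch.float32)
        with torch.no_grad():
            target_hi = r + gamma * (1 - done) * q_high(s_next).max()
            best_mode = q_high(s_next).argmax().float().unsqueeze(0)
            target_lo = r + gamma * (1 - done) * q_low(torch.cat([s_next, best_mode])).max()
        s_lo = torch.cat([s, torch.tensor([float(mode)])])
        loss = (q_high(s)[mode] - target_hi) ** 2 + (q_low(s_lo)[split] - target_lo) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()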

Article Engineering, Electrical & Electronic

Heuristic Energy Management Strategy of Hybrid Electric Vehicle Based on Deep Reinforcement Learning With Accelerated Gradient Optimization

Guodong Du et al.

Summary: A heuristic deep reinforcement learning control strategy is proposed for energy management of series hybrid electric vehicles, using adaptive moment estimation (Adam) and experience replay to improve training efficiency. The strategy trains faster and achieves better fuel economy than existing methods, and it remains adaptive and stable across different driving cycles. Two of its ingredients are sketched after this entry.

IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION (2021)
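
Two of the ingredients named in the summary, experience replay and a heuristic (rule-based) action filter, can be sketched briefly. The buffer size, batch size, state-of-charge thresholds, and the rule itself below are illustrative assumptions rather than the authors' implementation.

    # Minimal experience-replay buffer and heuristic action filter (assumed values).
    import random
    from collections import deque

    replay = deque(maxlen=50_000)        # stores (state, action, reward, next_state, done)

    def store(transition):
        replay.append(transition)

    def sample(batch_size=64):
        """Uniformly sample a mini-batch once enough transitions are stored."""
        return random.sample(list(replay), batch_size) if len(replay) >= batch_size else []

    def heuristic_action(soc, requested_power, exploratory_power):
        """Overrule an exploratory engine-power choice when it violates simple rules."""
        if soc < 0.3:                    # battery low: keep the engine supplying the request
            return max(exploratory_power, requested_power)
        if soc > 0.8:                    # battery high: prefer electric-only drive
            return 0.0
        return exploratory_power         # otherwise keep the explored action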

Review Automation & Control Systems

The dilemma of PID tuning

Oluwasegun Ayokunle Somefun et al.

Summary: The PID control law is widely used in automatic feedback control and in learning tasks on dynamical systems, but tuning it for accurate control is an NP-hard problem. The complexity and cost of tuning PID parameters create a persistent dilemma that motivates continued work in this area. By classifying PID tuning methods and proposing solutions, the review aims to enable significant advances in PID control design. A minimal PID controller is sketched after this entry to make the tuned gains concrete.

ANNUAL REVIEWS IN CONTROL (2021)
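
To make the quantities being tuned concrete, below is a minimal discrete-time PID controller. The gains, sample time, and the state-of-charge regulation example are arbitrary placeholders, not values from the review.

    # Minimal discrete-time PID controller; Kp, Ki, Kd are the gains being tuned.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measurement):
            """Return the control output for one sample period."""
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical use: regulating battery state of charge toward 0.6.
    controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
    u = controller.step(setpoint=0.6, measurement=0.55)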

Article Thermodynamics

A novel energy management strategy of hybrid electric vehicle via an improved TD3 deep reinforcement learning

Jianhao Zhou et al.

Summary: In this study, the TD3 deep reinforcement learning algorithm is used to develop an intelligent energy management strategy (EMS) for hybrid electric vehicles, augmented with a local controller (LC) and a hybrid experience replay (HER) method. The improved TD3-based EMS shows the best fuel-optimization performance, the fastest convergence, and the highest robustness across different driving cycles. The core TD3 target computation is sketched after this entry.

ENERGY (2021)
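
The TD3 component referenced in the summary rests on a clipped double-Q target with target-policy smoothing. The sketch below shows only that target computation; the actor and critic target networks are passed in as callables, and the local controller and hybrid experience replay from the paper are omitted.

    # Clipped double-Q target with target-policy smoothing, the core of TD3.
    import torch

    def td3_target(r, s_next, done, actor_tgt, critic1_tgt, critic2_tgt,
                   gamma=0.99, noise_std=0.2, noise_clip=0.5, act_limit=1.0):
        """Compute the TD3 critic target for a batch of transitions."""
        with torch.no_grad():
            # Smooth the target policy with clipped Gaussian noise.
            noise = (torch.randn_like(actor_tgt(s_next)) * noise_std).clamp(-noise_clip, noise_clip)
            a_next = (actor_tgt(s_next) + noise).clamp(-act_limit, act_limit)
            # Take the minimum of the twin critics to reduce overestimation.
            q_next = torch.min(critic1_tgt(s_next, a_next), critic2_tgt(s_next, a_next))
            return r + gamma * (1.0 - done) * q_next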

Article Engineering, Electrical & Electronic

Learning Time Reduction Using Warm-Start Methods for a Reinforcement Learning-Based Supervisory Control in Hybrid Electric Vehicle Applications

Bin Xu et al.

Summary: This study aims to reduce the number of learning iterations of Q-learning in an HEV application by using warm-start methods, which yield significant improvements over traditional cold-start methods. A warm-start initialization is sketched after this entry.

IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION (2021)
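
A warm start for tabular Q-learning can be illustrated in a few lines: the Q-table is seeded from a rule-based controller instead of zeros, so early greedy actions already follow sensible engine decisions. The state discretization and the rule below are assumptions made for the sketch.

    # Warm-start initialization of a Q-table from a toy rule-based policy.
    import numpy as np

    n_soc_bins, n_power_bins, n_actions = 10, 10, 5

    def rule_based_action(soc_bin):
        """Toy rule: run the engine harder when the battery bin is low."""
        return n_actions - 1 if soc_bin < 3 else 0

    Q = np.zeros((n_soc_bins, n_power_bins, n_actions))
    for soc_bin in range(n_soc_bins):
        a = rule_based_action(soc_bin)
        Q[soc_bin, :, a] = 1.0          # small positive bias toward the rule's action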

Article Engineering, Electrical & Electronic

Ensemble Reinforcement Learning-Based Supervisory Control of Hybrid Electric Vehicle for Fuel Economy Improvement

Bin Xu et al.

IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION (2020)

Article Automation & Control Systems

Enhanced Q-learning for real-time hybrid electric vehicle energy management with deterministic rule

Yang Li et al.

MEASUREMENT & CONTROL (2020)

Article Engineering, Electrical & Electronic

Cross-Type Transfer for Deep Reinforcement Learning Based Hybrid Electric Vehicle Energy Management

Renzong Lian et al.

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY (2020)

Article Computer Science, Information Systems

Transfer Deep Reinforcement Learning-Enabled Energy Management Strategy for Hybrid Tracked Vehicle

Xiaowei Guo et al.

IEEE ACCESS (2020)

Article Multidisciplinary Sciences

Hierarchical motor control in mammals and machines

Josh Merel et al.

NATURE COMMUNICATIONS (2019)

Article Automation & Control Systems

A Bi-Level Control for Energy Efficiency Improvement of a Hybrid Tracked Vehicle

Teng Liu et al.

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (2018)

Article Chemistry, Multidisciplinary

Energy Management Strategy for a Hybrid Electric Vehicle Based on Deep Reinforcement Learning

Yue Hu et al.

APPLIED SCIENCES-BASEL (2018)

Article Engineering, Electrical & Electronic

Energy Management in Plug-in Hybrid Electric Vehicles: Recent Progress and a Connected Vehicles Perspective

Clara Marina Martinez et al.

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY (2017)

Article Automation & Control Systems

Reinforcement Learning Optimized Look-Ahead Energy Management of a Parallel Hybrid Electric Vehicle

Teng Liu et al.

IEEE-ASME TRANSACTIONS ON MECHATRONICS (2017)

Article Multidisciplinary Sciences

Mastering the game of Go without human knowledge

David Silver et al.

NATURE (2017)

Article Multidisciplinary Sciences

Mastering the game of Go with deep neural networks and tree search

David Silver et al.

NATURE (2016)

Review Green & Sustainable Science & Technology

A review on hybrid electric vehicles architecture and energy management strategies

M. F. M. Sabri et al.

RENEWABLE & SUSTAINABLE ENERGY REVIEWS (2016)

Article Automation & Control Systems

Reinforcement Learning of Adaptive Energy Management With Transition Probability for a Hybrid Electric Tracked Vehicle

Teng Liu et al.

IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS (2015)

Article Engineering, Electrical & Electronic

Energy Management for a Power-Split Plug-in Hybrid Electric Vehicle Based on Dynamic Programming and Neural Networks

Zheng Chen et al.

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY (2014)

Article Chemistry, Physical

Fuel economy evaluation of fuel cell hybrid vehicles based on equivalent fuel consumption

C. H. Zheng et al.

INTERNATIONAL JOURNAL OF HYDROGEN ENERGY (2012)

Article Automation & Control Systems

Optimal Control of Hybrid Electric Vehicles Based on Pontryagin's Minimum Principle

Namwook Kim et al.

IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY (2011)