4.7 Article

Learning Time Reduction Using Warm-Start Methods for a Reinforcement Learning-Based Supervisory Control in Hybrid Electric Vehicle Applications

Journal

IEEE Transactions on Transportation Electrification

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TTE.2020.3019009

Keywords

Hybrid electric vehicles; Supervisory control; Batteries; Engines; Torque; Force; Electronic countermeasures; Hybrid electric vehicle (HEV); learning time reduction; Q-learning; supervisory control

Reinforcement learning (RL) is gradually being adopted in hybrid electric vehicle (HEV) supervisory control. Although RL delivers significant fuel-consumption savings, its long learning time makes it difficult to apply in real-world vehicles. This study aims to reduce the number of learning iterations of Q-learning in HEV applications by using warm-start methods. Unlike previous studies, which initialized Q-learning with zero or random Q-values, this study initializes Q-learning with different supervisory controls and provides a detailed analysis. The results show that the proposed warm-start Q-learning requires 68.8% fewer iterations than cold-start Q-learning and improves fuel economy (MPG) by 10%-16% compared with equivalent consumption minimization strategy (ECMS) control. These results can help facilitate the deployment of RL in vehicle applications.
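To illustrate the idea of warm-starting versus cold-starting tabular Q-learning, the sketch below seeds the Q-table so that its initial greedy policy mimics an existing supervisory controller (for example, an ECMS-like baseline), rather than starting from all-zero values. This is a minimal illustration under assumed state/action discretizations; the names n_states, n_actions, baseline_policy, and warm_value are hypothetical and do not reflect the paper's actual implementation.

```python
import numpy as np

n_states = 1000   # e.g., discretized (SOC, power demand) grid -- assumed size
n_actions = 21    # e.g., discretized engine/motor torque-split levels -- assumed size

# Cold start: all state-action values begin at zero, so early exploration
# carries no prior knowledge of which torque splits are reasonable.
q_cold = np.zeros((n_states, n_actions))

def warm_start_q(baseline_policy, warm_value=1.0):
    """Warm start: favor the action an existing supervisory control would take."""
    q = np.zeros((n_states, n_actions))
    for s in range(n_states):
        q[s, baseline_policy(s)] = warm_value  # greedy policy now mimics the baseline
    return q

# Hypothetical baseline controller: maps each state index to a torque-split action.
baseline_policy = lambda s: (s * n_actions) // n_states
q_warm = warm_start_q(baseline_policy)

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update, applied identically to either table:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
```

Because the warm-started table already encodes a sensible default policy, fewer environment interactions are needed before the learned policy outperforms the baseline, which is the mechanism behind the reported reduction in learning iterations.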

