Article

Reinforcement Learning-Based Plug-in Electric Vehicle Charging With Forecasted Price

Journal

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
Volume 66, Issue 5, Pages 3674-3684

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TVT.2016.2603536

Keywords

Cost reduction; demand response; plug-in electric vehicle (PEV); price prediction; reinforcement learning (RL); smart charging

Abstract

This paper proposes a novel demand response method that aims to reduce the long-term cost of charging the battery of an individual plug-in electric vehicle (PEV). The problem is cast as a daily decision-making problem of choosing how much energy to charge into the PEV battery each day. We model the problem as a Markov decision process (MDP) with unknown transition probabilities. A batch reinforcement learning (RL) algorithm is proposed for learning an optimal cost-reducing charging policy from a batch of transition samples and for making cost-reducing charging decisions in new situations. To capture the day-to-day differences in electricity charging costs, the method uses actual electricity prices for the current day and predicted electricity prices for the following day. A Bayesian neural network is employed to predict the electricity prices. The RL training dataset is constructed from historical prices, and a linear-programming-based method is developed for creating a dataset of optimal charging decisions. Different charging scenarios are simulated for each day of the historical time frame using the set of past electricity prices. Simulation results using real-world pricing data demonstrate cost savings of 10%-50% for the PEV owner when using the proposed charging method.
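The linear-programming step mentioned in the abstract lends itself to a short illustration. The sketch below is a minimal, hypothetical formulation, not the paper's exact model: given one day of known hourly prices, it computes the hourly charging schedule that meets a fixed energy requirement at minimum cost, subject to a per-hour charging limit. The function name, the price values, and the energy_required and max_rate_per_hour parameters are illustrative assumptions; the paper's actual constraints (battery dynamics, departure deadlines, and so on) are not reproduced here.

# Minimal LP sketch (illustrative; not the paper's exact formulation):
# choose hourly charging amounts x that minimize prices . x, subject to
# sum(x) == energy_required and 0 <= x_t <= max_rate_per_hour.
import numpy as np
from scipy.optimize import linprog

def optimal_charging_schedule(prices, energy_required, max_rate_per_hour):
    n = len(prices)
    A_eq = np.ones((1, n))           # total energy charged over the day...
    b_eq = [energy_required]         # ...must equal the daily requirement
    bounds = [(0.0, max_rate_per_hour)] * n
    res = linprog(c=prices, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    if not res.success:
        raise ValueError(f"LP infeasible: {res.message}")
    return res.x                     # kWh charged in each hour

# Hypothetical example: 24 hourly prices ($/kWh), 10 kWh needed, 3.3 kW charger.
prices = np.array([0.12, 0.11, 0.10, 0.09, 0.09, 0.10, 0.13, 0.18,
                   0.22, 0.21, 0.20, 0.19, 0.18, 0.17, 0.17, 0.18,
                   0.20, 0.24, 0.26, 0.23, 0.19, 0.16, 0.14, 0.13])
schedule = optimal_charging_schedule(prices, energy_required=10.0, max_rate_per_hour=3.3)
print(np.round(schedule, 2), "total cost:", round(float(prices @ schedule), 2))

In this toy version, charging is simply concentrated in the cheapest hours of a known price curve; in the paper, next-day prices are not known and are instead forecast with a Bayesian neural network, with the batch RL policy making the actual charging decisions.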
