4.7 Article

Reinforcement Learning of Heuristic EV Fleet Charging in a Day-Ahead Electricity Market

Journal

IEEE TRANSACTIONS ON SMART GRID
Volume 6, Issue 4, Pages 1795-1805

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSG.2015.2393059

Keywords

Demand-side management; electric vehicles (EVs); reinforcement learning (RL); stochastic programming (SP)

Funding

  1. DistriNet Research Group of the Department of Computer Science, Catholic University of Leuven
  2. Vlaamse Instelling Voor Technologisch Onderzoek, Flemish Institute for Technological Research
  3. Electrical Energy and Computing Architectures Research Group of the Department of Electrical Engineering, Catholic University of Leuven
  4. Department of Electrical Engineering and Computer Science, University of Liege
  5. Institute for the Promotion of Innovation by Science and Technology in Flanders

Abstract

This paper addresses the problem of defining a day-ahead consumption plan for charging a fleet of electric vehicles (EVs), and following this plan during operation. A challenge herein is the charging flexibility of the EVs, which is unknown beforehand and depends on numerous details about each EV (e.g., plug-in times, power limitations, battery size, power curve, etc.). To cope with this challenge, EV charging is controlled during operation by a heuristic scheme, and the resulting charging behavior of the EV fleet is learned using batch-mode reinforcement learning. Based on this learned behavior, a cost-effective day-ahead consumption plan can be defined. In simulation experiments, our approach is benchmarked against a multistage stochastic programming solution, which uses an exact model of each EV's charging flexibility. Results show that our approach is able to find a day-ahead consumption plan of comparable quality to the benchmark solution, without requiring an exact day-ahead model of each EV's charging flexibility.
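To make the idea of learning fleet charging behavior from batch data more concrete, the sketch below shows a generic batch-mode RL loop (fitted Q iteration with a tree-based regressor) on a toy aggregate charging problem. Everything in it is an assumption made for illustration, not taken from the paper: the toy simulator standing in for the heuristic-controlled fleet, the aggregate state of (hour, remaining energy to charge), the discretized action set, the synthetic day-ahead price profile, and the use of scikit-learn's ExtraTreesRegressor.

```python
# Minimal illustrative sketch: batch-mode RL (fitted Q iteration) on synthetic
# EV-fleet charging transitions. All names, limits and the price profile are
# assumptions for illustration only.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)

HOURS = 24
P_MAX = 100.0                          # assumed aggregate charging power limit [kW]
ACTIONS = np.linspace(0.0, P_MAX, 5)   # discretized aggregate charging power
price = 40 + 20 * np.sin(np.arange(HOURS) / HOURS * 2 * np.pi)  # toy price [EUR/MWh]

def simulate_batch(n_episodes=200):
    """Collect (state, action, reward, next_state, done) tuples from a toy
    simulator that stands in for the heuristic-controlled fleet in operation."""
    batch = []
    for _ in range(n_episodes):
        remaining = rng.uniform(200.0, 600.0)            # kWh still to be charged
        for h in range(HOURS):
            a = ACTIONS[rng.integers(len(ACTIONS))]      # exploratory action
            energy = min(a, remaining)                   # 1-hour time slots
            cost = price[h] / 1000.0 * energy
            penalty = 5.0 * max(remaining - energy, 0.0) if h == HOURS - 1 else 0.0
            reward = -(cost + penalty)
            next_remaining = remaining - energy
            batch.append(((h, remaining), a, reward,
                          (h + 1, next_remaining), h == HOURS - 1))
            remaining = next_remaining
    return batch

def fitted_q_iteration(batch, n_iter=HOURS):
    """Batch-mode RL: repeatedly regress Q(s, a) on the fixed batch of tuples."""
    X = np.array([[s[0], s[1], a] for (s, a, r, s2, d) in batch])
    R = np.array([r for (s, a, r, s2, d) in batch])
    done = np.array([d for (s, a, r, s2, d) in batch])
    S2 = np.array([[s2[0], s2[1]] for (s, a, r, s2, d) in batch])
    q = None
    for _ in range(n_iter):
        if q is None:
            target = R
        else:
            # Bootstrap target: max over actions of the current Q at the next state.
            q_next = np.column_stack([
                q.predict(np.column_stack([S2, np.full(len(S2), a)])) for a in ACTIONS
            ]).max(axis=1)
            target = R + np.where(done, 0.0, q_next)
        q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, target)
    return q

q = fitted_q_iteration(simulate_batch())

# Greedy day-ahead plan from the learned Q-function for one assumed fleet demand.
remaining = 400.0
plan = []
for h in range(HOURS):
    q_vals = [q.predict([[h, remaining, a]])[0] for a in ACTIONS]
    a = ACTIONS[int(np.argmax(q_vals))]
    plan.append(a)
    remaining = max(remaining - a, 0.0)
print("day-ahead consumption plan [kW per hour]:", np.round(plan, 1))
```

Running the script prints a 24-hour aggregate consumption plan obtained greedily from the learned Q-function, which shifts charging toward the low-price hours of the toy price curve. This only illustrates the general batch-mode RL mechanism; the paper's actual state representation, reward definition, and planning step should be taken from the paper itself.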
