Journal
IEEE TRANSACTIONS ON SMART GRID
Volume 6, Issue 4, Pages 1795-1805
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSG.2015.2393059
Keywords
Demand-side management; electric vehicles (EVs); reinforcement learning (RL); stochastic programming (SP)
Funding
- DistriNet Research Group of the Department of Computer Science, Catholic University of Leuven
- Vlaamse Instelling Voor Technologisch Onderzoek, Flemish Institute for Technological Research
- Electrical Energy and Computing Architectures Research Group of the Department of Electrical Engineering, Catholic University of Leuven
- Department of Electrical Engineering and Computer Science, University of Liege
- Institute for the Promotion of Innovation by Science and Technology in Flanders
Abstract
This paper addresses the problem of defining a day-ahead consumption plan for charging a fleet of electric vehicles (EVs), and of following this plan during operation. A key challenge is that the charging flexibility of the EVs is not known beforehand, since it depends on numerous details of each EV (e.g., plug-in times, power limitations, battery size, power curve). To cope with this challenge, EV charging is controlled during operation by a heuristic scheme, and the resulting charging behavior of the EV fleet is learned using batch-mode reinforcement learning. Based on this learned behavior, a cost-effective day-ahead consumption plan can be defined. In simulation experiments, our approach is benchmarked against a multistage stochastic programming solution, which uses an exact model of each EV's charging flexibility. Results show that our approach is able to find a day-ahead consumption plan of comparable quality to the benchmark solution, without requiring an exact day-ahead model of each EV's charging flexibility.
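The workflow sketched in the abstract (heuristic real-time dispatch, batch learning of the fleet's aggregate charging behavior, day-ahead planning against hourly prices) can be illustrated roughly as follows. This is a hypothetical toy, not the paper's method: all names and numbers are invented, and the batch-mode reinforcement learning step is replaced by a much simpler batch estimate of the fleet's hourly charging capacity, probed from logged dispatch samples.

```python
import random

def heuristic_dispatch(fleet, hour, requested_kw):
    """Greedily charge connected EVs (earliest departure first) up to requested_kw.

    One-hour time step, so kW delivered equals kWh of energy. Mutates needed_kwh.
    """
    delivered = 0.0
    connected = [ev for ev in fleet
                 if ev["arrival"] <= hour < ev["departure"] and ev["needed_kwh"] > 1e-9]
    for ev in sorted(connected, key=lambda e: e["departure"]):
        p = min(ev["max_kw"], ev["needed_kwh"], requested_kw - delivered)
        if p <= 0:
            break
        ev["needed_kwh"] -= p
        delivered += p
    return delivered

def sample_fleet(rng, n=20):
    """Random fleet scenario: plug-in window, remaining energy need, power limit."""
    fleet = []
    for _ in range(n):
        arr = rng.randint(0, 6)
        fleet.append({"arrival": arr,
                      "departure": rng.randint(arr + 4, 23),
                      "needed_kwh": rng.uniform(5.0, 20.0),
                      "max_kw": rng.choice([3.3, 7.4, 11.0])})
    return fleet

def learn_fleet_response(days=50, seed=0):
    """Batch phase (stand-in for the RL step): probe the heuristic with random
    requests over many simulated days and keep, per hour, the largest power the
    fleet was ever able to absorb."""
    rng = random.Random(seed)
    caps = [0.0] * 24
    for _ in range(days):
        fleet = sample_fleet(rng)
        for h in range(24):
            delivered = heuristic_dispatch(fleet, h, rng.uniform(0.0, 100.0))
            caps[h] = max(caps[h], delivered)
    return caps

def day_ahead_plan(caps, prices, energy_kwh):
    """Fill the cheapest hours first, never above the learned hourly capacity."""
    plan = [0.0] * 24
    remaining = energy_kwh
    for h in sorted(range(24), key=lambda h: prices[h]):
        take = min(caps[h], remaining)
        plan[h] = take
        remaining -= take
        if remaining <= 1e-9:
            break
    return plan

caps = learn_fleet_response()
prices = [0.30 if 7 <= h <= 21 else 0.10 for h in range(24)]  # toy night tariff
plan = day_ahead_plan(caps, prices, energy_kwh=150.0)
```

The actual paper learns a model of the fleet's stochastic charging behavior with batch-mode reinforcement learning rather than the simple per-hour capacity bound used here; the sketch only shows how a learned aggregate response can replace an exact per-EV flexibility model when drawing up the day-ahead plan.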