Journal
APPLIED ENERGY
Volume 304, Issue -, Pages -
Publisher: ELSEVIER SCI LTD
DOI: 10.1016/j.apenergy.2021.117754
Keywords
Electric vehicle charging station; Electric vehicle; Deep reinforcement learning; Federated reinforcement learning; Dynamic pricing; Profit maximization
Funding
- Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education [2020R1F1A1049314]
- Human Resources Development program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), funded by the Korea government Ministry of Trade, Industry and Energy [20204030200090]
- National Research Foundation of Korea [2020R1F1A1049314]; funding source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)
A privacy-preserving distributed deep reinforcement learning (DRL) framework is proposed to maximize the profits of smart EVCSs without sharing EVCS data. Numerical examples demonstrate the effectiveness of the proposed approach under varying conditions.
Profit maximization of electric vehicle charging station (EVCS) operation drives increasing investment in the deployment of EVCSs, thereby raising the penetration of electric vehicles (EVs) and supporting high-quality charging services for EV users. However, existing model-based approaches for profit maximization of EVCSs may exhibit poor performance owing to the underutilization of massive data and inaccurate modeling of EVCS operation in a dynamic environment. Furthermore, the existing approaches can be vulnerable to adversaries that abuse private EVCS operation data for malicious purposes. To resolve these limitations, we propose a privacy-preserving distributed deep reinforcement learning (DRL) framework that maximizes the profits of multiple smart EVCSs integrated with photovoltaic and energy storage systems under a dynamic pricing strategy. In the proposed framework, DRL agents using the soft actor-critic method determine the schedules of the profitable selling price and charging/discharging energy for EVCSs. To preserve the privacy of EVCS operation data, a federated reinforcement learning method is adopted in which only the local and global neural network models of the DRL agents are exchanged between the DRL agents at the EVCSs and the global agent at the central server, without sharing EVCS data. Numerical examples demonstrate the effectiveness of the proposed approach in terms of convergence of the training curve for the DRL agent, adaptive profitable selling price, energy charging and discharging, sensitivity of the selling price factor, and varying weather conditions.
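The federated exchange described in the abstract, where only neural network parameters travel between the EVCS-side DRL agents and the central global agent, can be sketched as a simple federated-averaging round. This is an illustrative sketch only: the function names (`federated_average`, `broadcast`) and the toy parameter vectors are assumptions for exposition, not the authors' implementation, which uses soft actor-critic networks.

```python
# Sketch of one federated reinforcement learning round: each EVCS agent
# trains a local model on its private data, sends only the model
# parameters to the central server, which averages them into a global
# model and broadcasts it back. No EVCS operation data ever leaves a site.

def federated_average(local_models):
    """Average the parameter vectors uploaded by the local DRL agents."""
    n_agents = len(local_models)
    n_params = len(local_models[0])
    return [sum(model[i] for model in local_models) / n_agents
            for i in range(n_params)]

def broadcast(global_model, n_agents):
    """Each agent starts its next local training round from the global model."""
    return [list(global_model) for _ in range(n_agents)]

# Three EVCS agents with toy two-parameter "networks"
local = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
global_model = federated_average(local)  # -> [3.0, 4.0]
local = broadcast(global_model, 3)       # all agents now share the global model
```

In the full method, this aggregation would operate on the actor and critic network weights of each soft actor-critic agent rather than on plain lists, but the privacy property is the same: only parameters, never charging records, cross the network.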