Article

Dynamic pricing and energy management for profit maximization in multiple smart electric vehicle charging stations: A privacy-preserving deep reinforcement learning approach

Journal

APPLIED ENERGY
Volume 304, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.apenergy.2021.117754

Keywords

Electric vehicle charging station; Electric vehicle; Deep reinforcement learning; Federated reinforcement learning; Dynamic pricing; Profit maximization

Funding

  1. Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Education [2020R1F1A1049314]
  2. Human Resources Development of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) - Korea government Ministry of Trade, Industry and Energy [20204030200090]
  3. National Research Foundation of Korea [2020R1F1A1049314] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

A privacy-preserving distributed deep reinforcement learning (DRL) framework is proposed to maximize the profits of smart EVCSs without sharing EVCS data. Numerical examples demonstrate the effectiveness of the proposed approach under varying conditions.
Profit maximization of electric vehicle charging station (EVCS) operation drives increased investment in the deployment of EVCSs, thereby increasing the penetration of electric vehicles (EVs) and supporting high-quality charging service for EV users. However, existing model-based approaches for profit maximization of EVCSs may exhibit poor performance owing to the underutilization of massive data and inaccurate modeling of EVCS operation in a dynamic environment. Furthermore, the existing approaches can be vulnerable to adversaries that abuse private EVCS operation data for malicious purposes. To resolve these limitations, we propose a privacy-preserving distributed deep reinforcement learning (DRL) framework that maximizes the profits of multiple smart EVCSs integrated with photovoltaic and energy storage systems under a dynamic pricing strategy. In the proposed framework, DRL agents using the soft actor-critic method determine the schedules of the profitable selling price and charging/discharging energy for EVCSs. To preserve the privacy of EVCS operation data, a federated reinforcement learning method is adopted in which only the local and global neural network models of the DRL agents are exchanged between the DRL agents at the EVCSs and the global agent at the central server, without sharing EVCS data. Numerical examples demonstrate the effectiveness of the proposed approach in terms of convergence of the training curve for the DRL agent, adaptive profitable selling price, energy charging and discharging, sensitivity of the selling price factor, and varying weather conditions.
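The abstract states that only local and global neural network models are exchanged between the EVCS agents and the central server, but does not specify the aggregation rule. A minimal FedAvg-style sketch of such a round is shown below, assuming simple parameter averaging; the function name `federated_average` and the toy two-parameter "networks" are illustrative, not the authors' implementation.

```python
def federated_average(local_models):
    """Element-wise mean of each named parameter vector across agents.

    Each local model is a dict mapping a parameter name to a list of
    floats (a flattened view of one layer's weights). Only these
    parameters leave the charging station; raw EVCS operation data
    (prices, charging/discharging schedules) never do.
    """
    n = len(local_models)
    return {
        name: [sum(vals) / n for vals in zip(*(m[name] for m in local_models))]
        for name in local_models[0]
    }

# One illustrative aggregation round with three EVCS agents.
agents = [
    {"w": [1.0, 2.0], "b": [0.5]},
    {"w": [3.0, 4.0], "b": [1.5]},
    {"w": [2.0, 3.0], "b": [1.0]},
]
global_model = federated_average(agents)
print(global_model)  # {'w': [2.0, 3.0], 'b': [1.0]}
```

In a full training loop, the averaged `global_model` would be broadcast back to each EVCS agent to initialize its next round of local soft actor-critic updates.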
