Article

Dynamic pricing and energy management for profit maximization in multiple smart electric vehicle charging stations: A privacy-preserving deep reinforcement learning approach

Journal

APPLIED ENERGY
Volume 304, Article 117754

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.apenergy.2021.117754

Keywords

Electric vehicle charging station; Electric vehicle; Deep reinforcement learning; Federated reinforcement learning; Dynamic pricing; Profit maximization

Funding

  1. Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Education [2020R1F1A1049314]
  2. Human Resources Development of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) - Korea government Ministry of Trade, Industry and Energy [20204030200090]
  3. National Research Foundation of Korea [2020R1F1A1049314] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

A privacy-preserving distributed deep reinforcement learning (DRL) framework is proposed to maximize the profits of smart EVCSs without sharing EVCS data. Numerical examples demonstrate the effectiveness of the proposed approach under varying conditions.
Profit maximization of electric vehicle charging station (EVCS) operation encourages increased investment in the deployment of EVCSs, thereby raising the penetration of electric vehicles (EVs) and supporting high-quality charging service for EV users. However, existing model-based approaches for profit maximization of EVCSs may exhibit poor performance owing to the underutilization of massive data and inaccurate modeling of EVCS operation in a dynamic environment. Furthermore, the existing approaches can be vulnerable to adversaries that abuse private EVCS operation data for malicious purposes. To resolve these limitations, we propose a privacy-preserving distributed deep reinforcement learning (DRL) framework that maximizes the profits of multiple smart EVCSs integrated with photovoltaic and energy storage systems under a dynamic pricing strategy. In the proposed framework, DRL agents using the soft actor-critic method determine the schedules of the profitable selling price and charging/discharging energy for EVCSs. To preserve the privacy of EVCS operation data, a federated reinforcement learning method is adopted in which only the local and global neural network models of the DRL agents are exchanged between the DRL agents at the EVCSs and the global agent at the central server, without sharing EVCS data. Numerical examples demonstrate the effectiveness of the proposed approach in terms of convergence of the training curve for the DRL agent, adaptive profitable selling price, energy charging and discharging, sensitivity of the selling price factor, and varying weather conditions.
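The federated aggregation step described in the abstract, where each EVCS agent uploads only its network parameters and the central server returns a global model, can be sketched as a simple federated-averaging routine. This is an illustrative sketch, not the authors' implementation; the names (`make_local_weights`, `fed_avg`) and the plain-list weight representation are assumptions made for clarity.

```python
# Hypothetical sketch of federated model aggregation: only parameter
# vectors leave each EVCS agent; raw charging/pricing data stay local.

import random


def make_local_weights(n_params, seed):
    """Stand-in for one EVCS agent's locally trained network weights."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_params)]


def fed_avg(local_models):
    """Global agent: element-wise average of the uploaded local models.

    This is the classic federated-averaging rule; the private EVCS
    operation data used to train each local model are never shared.
    """
    n = len(local_models)
    length = len(local_models[0])
    return [sum(model[i] for model in local_models) / n for i in range(length)]


# Three EVCS agents train locally, then upload their weights.
local_models = [make_local_weights(4, seed=s) for s in (1, 2, 3)]

# The server aggregates and would broadcast the result back to all agents.
global_model = fed_avg(local_models)
print(global_model)
```

In the paper's setting this exchange would repeat each training round: local soft actor-critic updates at the EVCSs, followed by aggregation at the central server and redistribution of the global model.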


