Journal
IEEE TRANSACTIONS ON INDUSTRY APPLICATIONS
Volume 56, Issue 5, Pages 5811-5823
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIA.2020.2990096
Keywords
Electric vehicle charging; Reinforcement learning; Distribution networks; Charging stations; Power system stability; Uncertainty; Markov processes; Electric vehicle (EV); Markov decision process (MDP); optimal charging strategy; reinforcement learning (RL)
Funding
- National Key Research and Development Program of China [2016YFB0901900]
- National Natural Science Foundation of China [51977166, U1766215]
- Natural Science Foundation of Shaanxi Province [2020KW-022]
- China Postdoctoral Science Foundation [2017T100748]
Electric vehicles (EVs) have developed rapidly in recent years and their penetration has increased significantly, which, however, brings new challenges to power systems. Owing to the stochastic behavior of EVs, improper charging strategies may drive the distribution network outside its voltage security region. To address this problem, an optimal EV charging strategy in a distribution network is proposed that maximizes the profit of the distribution system operator while satisfying all physical constraints. To handle the uncertainties introduced by EVs, a Markov decision process (MDP) model is built to characterize their time series, and the deep deterministic policy gradient (DDPG) based reinforcement learning technique is then used to analyze the impact of these uncertainties on the charging strategy. Finally, numerical results verify the effectiveness of the proposed method.
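The DDPG scheme the abstract refers to can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's model: a toy two-dimensional state (electricity price, EV state of charge), a scalar charging-power action, a stylized profit-style reward, and linear actor/critic functions standing in for the deep networks DDPG normally uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP (not the paper's model): state = [price, state of charge],
# action = charging power in [0, 1]. Reward mimics operator profit: revenue from
# delivered energy minus a quadratic grid-stress penalty.
STATE_DIM, ACTION_DIM = 2, 1

def reward(state, action):
    price, soc = state
    return float(action) * (1.0 - price) - 2.0 * float(action) ** 2

# Linear actor mu(s) = sigmoid(w_a . s) and linear critic Q(s, a) = w_c . [s, a];
# deliberately minimal stand-ins for DDPG's neural networks.
w_actor = rng.normal(scale=0.1, size=STATE_DIM)
w_critic = rng.normal(scale=0.1, size=STATE_DIM + ACTION_DIM)

def actor(s):
    return 1.0 / (1.0 + np.exp(-w_actor @ s))  # charging power in (0, 1)

def critic(s, a):
    return float(w_critic @ np.concatenate([s, [a]]))

gamma, lr = 0.95, 1e-2
for episode in range(500):
    s = rng.uniform(0.0, 1.0, size=STATE_DIM)   # random (price, soc) sample
    a = actor(s) + rng.normal(scale=0.1)        # exploration noise on the action
    a = float(np.clip(a, 0.0, 1.0))
    r = reward(s, a)
    s_next = rng.uniform(0.0, 1.0, size=STATE_DIM)
    # Critic: one-step TD update toward r + gamma * Q(s', mu(s')).
    target = r + gamma * critic(s_next, actor(s_next))
    td_err = target - critic(s, a)
    w_critic += lr * td_err * np.concatenate([s, [a]])
    # Actor: deterministic policy gradient, dQ/da * dmu/dw (chain rule through sigmoid).
    dq_da = w_critic[-1]
    mu = actor(s)
    w_actor += lr * dq_da * mu * (1.0 - mu) * s
```

The key DDPG ingredients survive even at this scale: a deterministic policy, exploration via additive noise, a TD-trained critic, and an actor updated along the critic's action gradient. The paper's actual formulation (network constraints, voltage security, replay buffers, target networks) is richer than this sketch.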