Article

Optimal Policy Characterization Enhanced Actor-Critic Approach for Electric Vehicle Charging Scheduling in a Power Distribution Network

Journal

IEEE TRANSACTIONS ON SMART GRID
Volume 12, Issue 2, Pages 1416-1428

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSG.2020.3028470

Keywords

Electric vehicle charging; optimal scheduling; stochastic processes; power distribution networks; reinforcement learning; deep reinforcement learning; solar power generation; dynamic programming; actor-critic approach

Funding

  1. Shun Hing Institute of Advanced Engineering, the Chinese University of Hong Kong [RNE-p5-19]


Abstract

We study the scheduling of large-scale electric vehicle (EV) charging in a power distribution network under random renewable generation and electricity prices. The problem is formulated as a stochastic dynamic program with unknown state transition probabilities. To mitigate the curse of dimensionality, we establish the nodal multi-target (NMT) characterization of the optimal scheduling policy: all EVs with the same deadline at the same bus should be charged toward a single target of remaining energy demand. We prove that the NMT characterization is optimal under arbitrarily random system dynamics. To adaptively learn the dynamics of the system uncertainty, we propose a model-free soft actor-critic (SAC) based method that determines the target levels for the characterized NMT policy. In our numerical experiments on the IEEE 37-node test feeder, the proposed SAC + NMT approach significantly outperforms existing deep reinforcement learning methods, as the established NMT characterization sharply reduces the dimensionality of the neural network outputs without loss of optimality.
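The NMT policy structure described in the abstract can be sketched in a few lines. This is an illustrative assumption-laden sketch, not the authors' implementation: the function `nmt_charging`, its field names, and all numerical values are hypothetical. It shows the key point that the learned action is one target per (bus, deadline) pair, from which every EV's charging amount follows, rather than one action per EV.

```python
# Illustrative sketch of the nodal multi-target (NMT) policy structure
# (hypothetical names and values, not the paper's code): all EVs sharing a
# bus and a deadline are charged toward one shared target of remaining
# energy demand.

def nmt_charging(evs, targets, max_rate):
    """Map per-(bus, deadline) targets to per-EV charging amounts.

    evs: list of dicts with 'bus', 'deadline', 'remaining' (kWh)
    targets: dict mapping (bus, deadline) -> target remaining demand (kWh)
    max_rate: per-EV charging limit for this time step (kWh)
    """
    schedule = []
    for ev in evs:
        target = targets.get((ev['bus'], ev['deadline']), 0.0)
        # Charge toward the shared target, bounded by the charger limit
        # and by the EV's own remaining demand.
        charge = min(max(ev['remaining'] - target, 0.0), max_rate, ev['remaining'])
        schedule.append(charge)
    return schedule

evs = [
    {'bus': 3, 'deadline': 8, 'remaining': 20.0},
    {'bus': 3, 'deadline': 8, 'remaining': 12.0},
    {'bus': 5, 'deadline': 6, 'remaining': 15.0},
]
# The actor network only needs to output one target per (bus, deadline)
# pair, not one action per EV -- this is the dimensionality reduction.
targets = {(3, 8): 10.0, (5, 6): 5.0}
print(nmt_charging(evs, targets, max_rate=7.0))  # [7.0, 2.0, 7.0]
```

Under this structure, the action dimension scales with the number of buses times the number of active deadlines, independent of the EV population size, which is what allows the SAC actor's output layer to stay small as the fleet grows.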

