Article

Deep-Learning-Based Joint Optimization of Renewable Energy Storage and Routing in Vehicular Energy Network

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 7, Issue 7, Pages 6229-6241

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JIOT.2020.2966660

Keywords

Energy storage; Renewable energy sources; Routing; Charging stations; Internet of Things; Resource management; Roads; Deep learning; long short-term memory (LSTM); optimization; routing; vehicular energy networks (VENs)

Funding

  1. National Natural Science Foundation of China [61601157]

Recent developments in renewable-energy-enabled electric vehicles (EVs) have posed challenges to the stability and efficiency of the vehicular energy network (VEN), a concrete realization of the Internet of Things (IoT) in energy and vehicular networks. In this article, we study a VEN with time-varying point-to-point traffic flow and adjustable energy storage capacity at stations. The goal is to jointly optimize the routing and dynamic storage allocation of renewable energy so as to maximize the efficiency of plant-to-station energy transfer. We first adopt a time-expanded topology graph to describe the scenario and model it as a maximum-flow problem. Next, we incorporate routing into our methodology and derive a joint energy-storage-capacity and route-planning method based on linear programming. We then extend the problem to a more general case in which the traffic pattern of the VEN is unknown. We apply the long short-term memory (LSTM) model, a deep learning method, to predict the traffic pattern and use the concept of reinforcement learning to iteratively improve the prediction accuracy. To evaluate performance, we implement our method first on regular buses and then extend it to EVs using real trace data from the PeMS system in California. Simulation results show that the joint optimization achieves near-optimal performance and, with the help of deep reinforcement learning, performs well even with a high rate of missing traffic information.
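The time-expanded maximum-flow formulation mentioned in the abstract can be sketched as follows. Each physical node is replicated once per time slot; arcs within a slot model energy transfer, and arcs between consecutive copies of the same station model storage carried over time, capped by that station's storage capacity. The topology, node names, and capacity values below are illustrative assumptions, not the paper's actual network or data.

```python
from collections import deque, defaultdict

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow on an adjacency dict {u: {v: cap}}."""
    # Build residual capacities, including zero-capacity reverse arcs.
    res = defaultdict(dict)
    for u, nbrs in capacity.items():
        for v, c in nbrs.items():
            res[u][v] = res[u].get(v, 0) + c
            res[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        # Trace the path back and push the bottleneck amount along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

def time_expanded_capacities(horizon=3):
    """Toy time-expanded VEN: one plant, two stations A and B."""
    cap = defaultdict(dict)
    for t in range(horizon):
        cap["src"][f"plant@{t}"] = 5       # renewable generation per slot
        cap[f"plant@{t}"][f"A@{t}"] = 5    # transfer arc within slot t
        cap[f"A@{t}"][f"B@{t}"] = 3        # transfer arc (the bottleneck)
        cap[f"B@{t}"]["sink"] = 10         # demand served at station B
        if t + 1 < horizon:
            cap[f"A@{t}"][f"A@{t+1}"] = 2  # storage arc at A between slots
            cap[f"B@{t}"][f"B@{t+1}"] = 4  # storage arc at B between slots
    return cap

print(max_flow(time_expanded_capacities(), "src", "sink"))  # → 9
```

In this toy instance the min cut is the set of per-slot transfer arcs A→B (3 units in each of 3 slots), so the maximum plant-to-station transfer is 9; resizing the storage arcs is exactly the knob the paper's joint optimization tunes.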
