Article

Scheduling the Operation of a Connected Vehicular Network Using Deep Reinforcement Learning

Journal

IEEE Transactions on Intelligent Transportation Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TITS.2018.2832219

Keywords

Internet of Vehicles; deep reinforcement learning; scheduling

Funding

  1. NSERC
  2. Concordia University

Driven by the expeditious evolution of the Internet of Things, conventional vehicular ad hoc networks will progress toward the Internet of Vehicles (IoV). With the rapid development of computation and communication technologies, IoV promises huge commercial interest and research value, thereby attracting a large number of companies and researchers. In an effort to ensure the driver's well-being and satisfy the demand for continuous connectivity in the IoV era, this paper addresses both safety and quality-of-service (QoS) concerns in a green, balanced, connected, and efficient vehicular network. Using recent advances in training deep neural networks, we exploit a deep reinforcement learning model, namely the deep Q-network, which learns a scheduling policy from high-dimensional inputs corresponding to the current characteristics of the underlying model. The realized policy serves to extend the lifetime of the battery-powered vehicular network while promoting a safe environment that meets acceptable QoS levels. The presented deep reinforcement learning model is found to outperform several scheduling benchmarks in terms of completed request percentage (10-25%), mean request delay (10-15%), and total network lifetime (5-65%).
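The abstract describes a deep Q-network that maps the network's current state to a scheduling decision. As a rough illustration of that idea, below is a minimal numpy-only sketch assuming a toy setting: the state is a (battery, queue) pair per node, an action picks which battery-powered node serves the next request, and a one-hidden-layer Q-network is trained with epsilon-greedy exploration and temporal-difference updates. All names, the state layout, and the reward shaping are hypothetical assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 3               # hypothetical number of schedulable nodes
STATE_DIM = 2 * N_NODES   # (battery, queue) per node
HIDDEN = 16

# One-hidden-layer Q-network: state -> one Q-value per action (node).
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_NODES))

def q_values(state):
    h = np.maximum(0.0, state @ W1)    # ReLU hidden layer
    return h @ W2, h

def select_action(state, epsilon=0.1):
    """Epsilon-greedy action selection over the network's Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(N_NODES))
    q, _ = q_values(state)
    return int(np.argmax(q))

def td_update(state, action, reward, next_state, gamma=0.95, lr=0.01):
    """One gradient step on the squared TD error for the chosen action."""
    global W1, W2
    q, h = q_values(state)
    next_q, _ = q_values(next_state)
    target = reward + gamma * np.max(next_q)
    err = q[action] - target
    # Backpropagate through both linear layers (chosen action only).
    grad_W2 = np.outer(h, np.eye(N_NODES)[action]) * err
    grad_h = W2[:, action] * err
    grad_h[h <= 0] = 0.0               # ReLU gate
    grad_W1 = np.outer(state, grad_h)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return err

def step(state, action):
    """Toy environment: reward favours nodes with full batteries and
    short queues; serving a request drains the chosen node's battery."""
    battery, queue = state[2 * action], state[2 * action + 1]
    reward = battery - queue
    nxt = state.copy()
    nxt[2 * action] = max(0.0, battery - 0.1)
    return reward, nxt

# Short training rollout on the toy environment.
state = rng.random(STATE_DIM)
for _ in range(200):
    a = select_action(state)
    r, nxt = step(state, a)
    td_update(state, a, r, nxt)
    state = nxt
```

The paper's model operates on much richer, high-dimensional inputs and a reward reflecting safety, QoS, and lifetime jointly; the sketch only shows the control loop (observe state, pick node, update Q-estimates) shared by DQN-style schedulers.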

