Article

Reinforcement Learning for Resource Provisioning in the Vehicular Cloud

Journal

IEEE Wireless Communications
Volume 23, Issue 4, Pages 128-135

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/MWC.2016.7553036

Funding

  1. NPRP grant from Qatar National Research Fund (Qatar Foundation)

Abstract

This article presents a concise view of vehicular clouds that incorporates the various vehicular cloud models proposed to date. These models all extend the traditional cloud and its utility computing functionalities across the entities of a vehicular ad hoc network: fixed roadside units, onboard units embedded in vehicles, and the personal smart devices of drivers and passengers. Collectively, these entities yield abundant processing, storage, sensing, and communication resources. Vehicular clouds, however, require novel resource provisioning techniques that can address the intrinsic challenges of dynamic resource demand and stringent QoS requirements. In this article, we show the benefits of reinforcement-learning-based techniques for resource provisioning in the vehicular cloud. Because these techniques account for long-term reward rather than only immediate payoff, they are well suited to minimizing the overhead of resource provisioning for vehicular clouds.
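The abstract does not specify the authors' exact learning formulation, but a minimal Q-learning sketch illustrates the core idea of long-term, demand-aware provisioning. All names, state and action spaces, reward weights, and demand dynamics below are illustrative assumptions, not the paper's model.

```python
import random
from collections import defaultdict

# Toy Q-learning provisioner. States are discretized demand levels and
# actions are the number of vehicular resource units to provision; the
# reward trades QoS violations (under-provisioning) against overhead
# (over-provisioning). All spaces, weights, and dynamics here are
# assumed for illustration, not taken from the paper.

DEMAND_LEVELS = range(5)   # 0 = idle ... 4 = peak demand
ACTIONS = range(5)         # units of vehicular resources to provision
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(demand, provisioned):
    """Assumed cost model: unmet demand hurts QoS far more than idle units."""
    if provisioned < demand:
        return -10 * (demand - provisioned)
    return -(provisioned - demand)

def next_demand(demand):
    """Assumed demand dynamics: a bounded random walk over demand levels."""
    return max(0, min(4, demand + random.choice([-1, 0, 1])))

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = 2
for _ in range(50_000):
    action = choose_action(state)
    r = reward(state, action)
    nxt = next_demand(state)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    # Q-learning update: move toward reward plus discounted future value.
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

for d in DEMAND_LEVELS:
    print(f"demand {d} -> provision {max(ACTIONS, key=lambda a: Q[(d, a)])}")
```

In this toy setting the learned greedy policy converges to provisioning exactly the demanded number of units; a real vehicular cloud controller would face far larger state spaces (vehicle arrivals and departures, task deadlines, channel quality) and would typically rely on function approximation rather than a tabular Q.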
