Article

Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities

Journal

IEEE Internet of Things Journal
Volume 9, Issue 1, Pages 222-235

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JIOT.2021.3091674

Keywords

Network slicing; Smart cities; Cloud computing; Resource management; Quality of service; Vehicle dynamics; Ultra-reliable low-latency communication; Deep reinforcement learning (DRL); Edge computing; Fog RAN; Intelligent vehicular systems

Funding

  1. U.S. National Science Foundation (NSF) [ECCS-2029875]


This study addresses the network slicing problem for intelligent vehicular systems and smart city applications. It proposes a solution based on the fog radio access network and artificial intelligence, using deep reinforcement learning to adaptively learn the optimal slicing policy and achieve efficient resource allocation in dynamic environments.
Intelligent vehicular systems and smart city applications are the fastest-growing Internet-of-Things (IoT) implementations, with a compound annual growth rate of 30%. In view of the recent advances in IoT devices and the emerging new breed of IoT applications driven by artificial intelligence (AI), the fog radio access network (F-RAN) has recently been introduced for fifth-generation (5G) wireless communications to overcome the latency limitations of the cloud-RAN (C-RAN). We consider the network slicing problem of allocating the limited resources at the network edge (fog nodes) to vehicular and smart city users with heterogeneous latency and computing demands in dynamic environments. We develop a network slicing model based on a cluster of fog nodes (FNs) coordinated with an edge controller (EC) to efficiently utilize the limited resources at the network edge. For each service request in a cluster, the EC decides which FN should execute the task, i.e., serve the request locally at the edge, or whether to reject the task and refer it to the cloud. We formulate the problem as an infinite-horizon Markov decision process (MDP) and propose a deep reinforcement learning (DRL) solution to adaptively learn the optimal slicing policy. The performance of the proposed DRL-based slicing method is evaluated by comparing it with other slicing approaches in dynamic environments and for different scenarios of design objectives. Comprehensive simulation results corroborate that the proposed DRL-based EC quickly learns the optimal policy through interaction with the environment, which enables adaptive and automated network slicing for efficient resource allocation in dynamic vehicular and smart city environments.
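The MDP structure described above can be illustrated with a simplified sketch: an edge controller picks, for each incoming request, either a fog node to serve it or the cloud, and learns a policy from the resulting rewards. The paper uses deep reinforcement learning on a richer state space; the toy environment below (fog-node count, queue capacity, completion model, and reward values are all hypothetical) uses tabular Q-learning only to make the state/action/reward loop concrete.

```python
import random

# Toy stand-in for the slicing MDP: an edge controller (EC) assigns each
# request to one of N_FNS fog nodes (FNs) or refers it to the cloud.
# All numeric values here are illustrative, not from the paper.
N_FNS = 2
CAPACITY = 2  # hypothetical per-FN queue capacity
ACTIONS = list(range(N_FNS)) + ["cloud"]  # serve at FN i, or refer to cloud

def step(state, action, rng):
    """One transition: returns (next_state, reward).
    Hypothetical rewards: +1 for serving at an FN with spare capacity
    (low latency), -1 for overloading an FN, 0 for deferring to the cloud."""
    loads = list(state)
    if action == "cloud":
        reward = 0.0
    elif loads[action] < CAPACITY:
        loads[action] += 1
        reward = 1.0
    else:
        reward = -1.0
    # one randomly chosen FN completes a queued task this step
    idx = rng.randrange(N_FNS)
    loads[idx] = max(0, loads[idx] - 1)
    return tuple(loads), reward

def train(steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over the infinite-horizon toy MDP."""
    rng = random.Random(seed)
    Q = {}
    state = (0,) * N_FNS
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        nxt, r = step(state, action, rng)
        best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (r + gamma * best_next - old)
        state = nxt
    return Q
```

In this sketch the learned policy prefers serving requests at lightly loaded fog nodes (positive reward) and falls back to the cloud as queues fill, mirroring the edge/cloud trade-off the abstract describes; the paper's DRL agent replaces the Q-table with a neural network to handle larger, continuous state spaces.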

