Article

Cybertwin-driven resource allocation using deep reinforcement learning in 6G-enabled edge environment

Publisher

ELSEVIER
DOI: 10.1016/j.jksuci.2022.02.005

Keywords

Cybertwin; 6G; Resource allocation; Computation offloading; Deep reinforcement learning


The emergence of sixth-generation wireless communication technology has led to a rapid increase in real-time applications. Cybertwin-driven edge computing is proposed as a promising solution to meet user demand, but it comes with challenges. This work proposes a joint resource allocation and computation offloading scheme using deep reinforcement learning in Cybertwin-enabled 6G wireless networks. The results show that the proposed scheme reduces latency and energy consumption while improving the task completion rate compared to traditional methods.
The recent emergence of sixth-generation (6G) wireless communication technology has resulted in the rapid proliferation of a wide range of real-time applications. These applications are highly data- and computation-intensive and generate huge data traffic. Cybertwin-driven edge computing emerges as a promising solution to satisfy massive user demand, but it also introduces new challenges. One of the most difficult challenges in edge networks is efficiently offloading tasks while managing computation, communication, and cache resources. Traditional statistical optimization methods are incapable of addressing the offloading problem in a dynamic edge computing environment. In this work, we propose a joint resource allocation and computation offloading scheme that integrates deep reinforcement learning in Cybertwin-enabled 6G wireless networks. The proposed system uses the multi-agent twin delayed deep deterministic policy gradient (MATD3) algorithm to provide QoS to end users by minimizing overall latency and energy consumption with better management of cache resources. Since these edge resources are deployed in inaccessible locations, we also employ a secure authentication mechanism for Cybertwins. The proposed system is implemented in a simulated environment, and the results are compared across several performance metrics with previous benchmark methodologies such as RRA, GRA, and MADDPG. The comparative analysis reveals that the proposed MATD3 scheme reduces end-to-end latency and energy consumption by 13.8% and 12.5%, respectively, over MADDPG, with a 4% increase in successful task completion.
(c) 2022 The Authors. Published by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
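
The abstract names MATD3 but does not spell out its formulation. For readers unfamiliar with the method, below is a minimal Python/PyTorch sketch of the single-agent TD3-style update that MATD3 extends to multiple cooperating agents. The state and action dimensions, network sizes, hyperparameters, and the latency/energy reward weighting are all illustrative assumptions, not the paper's; target networks and the delayed policy update are omitted for brevity.

# Minimal sketch of a TD3-style twin-critic update, the single-agent core
# that MATD3 extends to multiple agents. Everything here (dimensions,
# network sizes, hyperparameters, reward weights) is an illustrative
# assumption; the abstract does not give the paper's exact formulation.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2  # assumed: e.g. channel/queue state; offload ratio + resource share

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

actor = mlp(STATE_DIM, ACTION_DIM)
critic1 = mlp(STATE_DIM + ACTION_DIM, 1)  # twin critics: taking their
critic2 = mlp(STATE_DIM + ACTION_DIM, 1)  # minimum curbs Q overestimation
opt_c = torch.optim.Adam(list(critic1.parameters()) + list(critic2.parameters()), lr=1e-3)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
GAMMA = 0.99

def reward(latency, energy, alpha=0.5, beta=0.5):
    # Assumed scalarization: penalize a weighted sum of latency and energy.
    return -(alpha * latency + beta * energy)

def td3_update(s, a, r, s_next):
    # Clipped double-Q target: add clipped noise to the target action and
    # take the minimum of the twin critics (TD3's two key devices).
    with torch.no_grad():
        noise = (0.2 * torch.randn(ACTION_DIM)).clamp(-0.5, 0.5)
        a_next = (actor(s_next) + noise).clamp(-1.0, 1.0)
        sa_next = torch.cat([s_next, a_next])
        target = r + GAMMA * torch.min(critic1(sa_next), critic2(sa_next))
    sa = torch.cat([s, a])
    critic_loss = ((critic1(sa) - target).pow(2) + (critic2(sa) - target).pow(2)).mean()
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    # Policy step: ascend Q1 with respect to the actor's own action.
    actor_loss = -critic1(torch.cat([s, actor(s)])).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

# Single illustrative transition with placeholder states.
s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
a = actor(s).detach()
td3_update(s, a, torch.tensor(reward(latency=0.12, energy=0.30)), s_next)

The clipped target-action noise and the minimum over twin critics are what distinguish TD3-based methods from MADDPG's single-critic update, and plausibly underlie the latency and energy gains the abstract reports over MADDPG.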

