Article

Energy-Efficient Task Offloading and Resource Allocation via Deep Reinforcement Learning for Augmented Reality in Mobile Edge Networks

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 8, Issue 13, Pages 10843-10856

Publisher

Institute of Electrical and Electronics Engineers (IEEE), Inc.
DOI: 10.1109/JIOT.2021.3050804

Keywords

Task analysis; Servers; Optimization; Resource management; Energy consumption; Computational modeling; Heuristic algorithms; Augmented reality (AR); deep reinforcement learning; Internet of Things (IoT); mobile-edge computing (MEC); multiagent deep deterministic policy gradient (MADDPG); resource allocation; task offloading

Funding

  1. Shaanxi Key Research and Development Program [2018ZDCXL-GY-04-03-02]

Abstract
Augmented reality (AR) applications are widely used in the Internet of Things (IoT) because of the immersive experience they offer users, but their ultralow-delay demands and high energy consumption pose a major challenge to current communication systems and terminal power budgets. The emergence of mobile-edge computing (MEC) offers a promising way to address this challenge. In this article, we study an energy-efficient task offloading and resource allocation scheme for AR in both single-MEC and multi-MEC systems. First, a more specific and detailed AR application model is established as a directed acyclic graph according to its internal functionality. Second, based on this AR model, a joint task offloading and resource allocation problem is formulated to minimize the energy consumption of each user subject to the latency requirement and the limited resources. The problem is a mixed multiuser competition and cooperation problem involving the task offloading decision, uplink/downlink transmission resource allocation, and computing resource allocation across users and the MEC server. Since the problem is NP-hard and the communication environment is dynamic, genetic and heuristic algorithms struggle to solve it. Therefore, we propose an intelligent and efficient resource allocation and task offloading algorithm based on the multiagent deep deterministic policy gradient (MADDPG) deep reinforcement learning framework for dynamic communication environments. Finally, simulation results show that the proposed algorithm can greatly reduce the energy consumption of each user terminal.
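To make the local-versus-offload energy trade-off concrete, the sketch below models each AR pipeline stage as a task with a CPU-cycle demand and an upload size, and greedily picks the lower-energy option that keeps cumulative latency under a deadline. This uses a standard MEC cost model (local energy proportional to cycles times CPU frequency squared; offloading costs only uplink transmission energy on the device). All parameter values, task names, and the greedy policy are illustrative assumptions for exposition, not the paper's DAG formulation or its MADDPG algorithm.

```python
from dataclasses import dataclass

# Hypothetical parameters (illustrative values, not from the paper).
KAPPA = 1e-27    # effective switched capacitance of the device CPU
F_LOCAL = 1e9    # local CPU frequency (cycles/s)
F_MEC = 1e10     # MEC server CPU frequency (cycles/s)
RATE = 50e6      # uplink rate (bits/s)
P_TX = 0.5       # device transmit power (W)

@dataclass
class Task:
    name: str
    cycles: float     # CPU cycles the task requires
    data_bits: float  # input data uploaded if the task is offloaded

def local_cost(t: Task):
    # Energy and latency when executing on the device itself.
    return KAPPA * F_LOCAL ** 2 * t.cycles, t.cycles / F_LOCAL

def offload_cost(t: Task):
    # The device pays only uplink transmission energy;
    # latency is transmission time plus remote execution time.
    tx_time = t.data_bits / RATE
    return P_TX * tx_time, tx_time + t.cycles / F_MEC

def decide(tasks, deadline):
    """Greedy per-task choice: lowest device energy among latency-feasible options.

    Tasks are treated as a serial chain for simplicity; the paper models the
    AR application as a general directed acyclic graph.
    """
    plan, total_e, total_t = {}, 0.0, 0.0
    for t in tasks:
        el, tl = local_cost(t)
        eo, to = offload_cost(t)
        options = [(el, tl, "local"), (eo, to, "offload")]
        # Keep only options that respect the cumulative deadline;
        # if none do, fall back to the lowest-energy option anyway.
        feasible = [o for o in options if total_t + o[1] <= deadline] or options
        e, lat, where = min(feasible)
        plan[t.name] = where
        total_e += e
        total_t += lat
    return plan, total_e, total_t

# Toy AR pipeline: tracking then rendering, with a 100 ms deadline.
pipeline = [Task("tracker", 2e8, 1e6), Task("renderer", 5e8, 8e6)]
plan, energy, latency = decide(pipeline, deadline=0.1)
```

With these parameters, offloading dominates because transmission energy (tens of millijoules) is far below local computation energy (hundreds of millijoules); the paper's joint optimization additionally shares the uplink, downlink, and server compute resources among competing users, which this per-user sketch ignores.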

