Article

Computing offloading and resource scheduling based on DDPG in ultra-dense edge computing networks

Journal

JOURNAL OF SUPERCOMPUTING

Publisher

SPRINGER
DOI: 10.1007/s11227-023-05816-w

Keywords

Mobile edge computing; Ultra-dense network; Offloading; Non-orthogonal multiple access; Deep reinforcement learning


To address the challenge of smart devices in the healthcare Internet of things (IoT) struggling to efficiently process intensive applications in real time, a collaborative cloud-edge offloading model tailored for ultra-dense edge computing (UDEC) networks is developed. While numerous studies have examined offloading optimization in mobile edge computing (MEC), it is imperative to consider non-orthogonal multiple access (NOMA) as the underlying physical-layer technology when addressing the offloading optimization process in MEC. Multiuser sharing of spectrum resources in NOMA enhances network spectrum utilization and reduces the computational delay when users transmit computing tasks. Consequently, a model for NOMA-assisted UDEC systems is proposed. The model jointly considers offloading decisions, computational resources, and sub-channel resources, and is formulated as a complex nonlinear mixed-integer programming problem. The aim is to reduce the task execution delay and energy consumption of smart devices while ensuring that users' maximum acceptable delay for processing medical computational tasks is met. Deep deterministic policy gradient (DDPG), a deep reinforcement learning method, is employed to solve the joint optimization problem. The final simulation results show that the algorithm converges well. The proposed offloading scheme reduces the system cost by 54.5% and 69.9% compared with scenarios where users perform all computations locally or offload all tasks to the base station (BS), respectively. The use of NOMA communication in the offloading scheme boosts network spectrum utilization and reduces the system cost by 87.09% compared with orthogonal multiple access (OMA).
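The abstract names the DDPG ingredients (a deterministic actor, a critic, target networks with soft updates, and a replay buffer) but gives no implementation details. The sketch below illustrates those ingredients on a toy offloading problem in plain NumPy; it is not the authors' method. The state layout, reward (negative system cost), linear networks, and all hyperparameters are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 3, 1      # hypothetical state: [task size, channel gain, queue load]
TAU, GAMMA, LR = 0.01, 0.9, 1e-2  # soft-update rate, discount, learning rate (illustrative)

# Linear actor and critic stand in for the deep networks used in the paper.
W_actor  = rng.normal(0.0, 0.1, (ACTION_DIM, STATE_DIM))
w_critic = rng.normal(0.0, 0.1, STATE_DIM + ACTION_DIM)
W_actor_t, w_critic_t = W_actor.copy(), w_critic.copy()   # target networks

def act(W, s):
    # Deterministic policy: a sigmoid keeps the offload fraction in [0, 1].
    return 1.0 / (1.0 + np.exp(-(W @ s)))

def q(w, s, a):
    # Linear critic Q(s, a) = w . [s; a].
    return float(w @ np.concatenate([s, a]))

def reward(s, a):
    # Hypothetical system cost: local-computation delay plus offloading energy.
    local = (1.0 - a[0]) * s[0]
    offload = 0.5 * a[0] * s[0] / max(s[1], 0.1)
    return -(local + offload)   # DDPG maximizes reward = negative cost

buffer = []
for _ in range(500):
    s = rng.uniform(0.1, 1.0, STATE_DIM)
    # Exploration: Gaussian noise added to the deterministic action.
    a = np.clip(act(W_actor, s) + rng.normal(0.0, 0.1, ACTION_DIM), 0.0, 1.0)
    r = reward(s, a)
    s2 = rng.uniform(0.1, 1.0, STATE_DIM)   # toy i.i.d. "next state"
    buffer.append((s, a, r, s2))

    # Replay: sample one stored transition (minibatch of 1 for brevity).
    s_b, a_b, r_b, s2_b = buffer[rng.integers(len(buffer))]

    # Critic update: TD target uses the *target* actor and critic.
    y = r_b + GAMMA * q(w_critic_t, s2_b, act(W_actor_t, s2_b))
    x = np.concatenate([s_b, a_b])
    td = q(w_critic, s_b, a_b) - y
    w_critic -= LR * 2.0 * td * x

    # Actor update: ascend the critic's gradient w.r.t. the action.
    a_pi = act(W_actor, s_b)
    dq_da = w_critic[STATE_DIM:]            # for a linear critic, dQ/da is its action weight
    W_actor += LR * np.outer(dq_da * a_pi * (1.0 - a_pi), s_b)

    # Soft (Polyak) target-network updates.
    W_actor_t += TAU * (W_actor - W_actor_t)
    w_critic_t += TAU * (w_critic - w_critic_t)

policy_out = act(W_actor, np.array([0.5, 0.5, 0.5]))
print(policy_out)   # learned offload fraction, always within [0, 1]
```

In the paper's setting the state would additionally encode NOMA sub-channel conditions, the action would cover joint offloading, computing-resource, and sub-channel decisions, and the networks would be multi-layer; the target networks and soft updates shown here are what give DDPG its stable convergence on such continuous joint decisions.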

