Article

Collaborative Computation Offloading and Resource Allocation in Multi-UAV-Assisted IoT Networks: A Deep Reinforcement Learning Approach

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 8, Issue 15, Pages 12203-12218

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2021.3063188

Keywords

Collaborative computation offloading; deep reinforcement learning (DRL); Edge Internet of Things (EIoT); IoT network; multi-UAV network; resource allocation

Funding

  1. National Natural Science Foundation of China [61771098]
  2. Fundamental Research Funds for the Central Universities [ZYGX2018J068]
  3. Department of Science and Technology of Sichuan Province [2020YFQ0025]

Abstract

In fifth-generation (5G) wireless networks, Edge-Internet-of-Things (EIoT) devices are envisioned to generate huge amounts of data. Due to their limited computation capacity and battery life, these devices cannot process all tasks locally. Mobile-edge computing (MEC), however, is a promising solution that enables offloading tasks to nearby MEC servers to improve quality of service. Moreover, during emergencies in areas suffering network failure, unmanned aerial vehicles (UAVs) can be deployed to restore the network by acting as aerial base stations and computational nodes for the edge network. In this article, we consider a central network controller that trains on observations and broadcasts the trained data to a multi-UAV cluster network. Each UAV cluster head acts as an agent and autonomously allocates resources to EIoT devices in a decentralized fashion. We propose a model-free deep reinforcement learning (DRL)-based collaborative computation offloading and resource allocation (CCORA-DRL) scheme for emergency situations in an aerial-to-ground (A2G) network, which can handle a continuous action space. Each agent independently learns efficient computation offloading policies in the network and checks the statuses of the UAVs through Jain's fairness index. The objective is to minimize task execution delay and energy consumption while acquiring an efficient solution through adaptive learning in the dynamic A2G network. Simulation results reveal that our scheme, using the deep deterministic policy gradient (DDPG), effectively learns the optimal policy, outperforming A3C, deep Q-network (DQN), and greedy-based offloading for local computation in stochastic dynamic environments.
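For concreteness, the sketch below illustrates the two quantities the abstract leans on: Jain's fairness index computed over per-UAV loads, and a scalar cost that combines task execution delay and energy consumption. The fairness formula is standard; the weighted-sum cost, the weight `w`, and the function names are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def jains_fairness(loads) -> float:
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).

    Equals 1.0 when all UAV cluster heads carry equal load and
    approaches 1/n as a single UAV absorbs all of the traffic.
    """
    x = np.asarray(loads, dtype=float)
    denom = x.size * np.sum(x ** 2)
    return float(np.sum(x) ** 2 / denom) if denom > 0 else 1.0

def offloading_cost(delay: float, energy: float, w: float = 0.5) -> float:
    """Hypothetical scalar objective: the paper minimizes task
    execution delay and energy consumption; a weighted sum with
    weight w is one common way to combine them (an assumption,
    not the paper's exact objective function)."""
    return w * delay + (1.0 - w) * energy

# Usage: a nearly balanced 4-UAV cluster scores close to 1.0.
print(jains_fairness([0.9, 1.0, 1.1, 1.0]))       # ~0.995
print(offloading_cost(delay=0.12, energy=0.30))   # 0.21
```

DDPG is a natural fit for the setting the abstract describes because offloading decisions such as task-split ratios and transmit power are continuous, whereas value-based methods like DQN require a discretized action space.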
