Journal
IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 13, Pages 11486-11496
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2023.3244909
Keywords
Computation offloading; deep reinforcement learning; graph convolutional network; Internet of Things (IoT); multiaccess edge computing (MEC)
Abstract
With the growing demand for latency-sensitive and compute-intensive services in the Internet of Things (IoT), multiaccess edge computing (MEC)-enabled IoT is envisioned as a promising technique that allows network nodes to have computing and caching capabilities. In this article, we propose a cache-aided MEC (CA-MEC) offloading framework for joint optimization of communication, computing, and caching (3C) resources in the MEC-enabled IoT. Our goal is to optimize the offloading decision and resource allocation strategy to minimize the system latency subject to dynamic cache capacities and computing resource constraints. We first formulate this optimization problem as a multiagent decision problem, modeled as a partially observable Markov decision process (POMDP). Then, the deep graph convolution reinforcement learning (DGRL) method is applied to motivate the agents to learn optimal strategies cooperatively in a highly dynamic environment. Simulations show that our method is highly effective for computation offloading and resource allocation and achieves superior results in a large-scale network.
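The abstract does not detail the DGRL architecture, but the core idea of graph convolution reinforcement learning is that each agent's decision (e.g., compute locally, offload, or use a cached result) is computed from features aggregated over its neighbors in the network graph. The following is a minimal illustrative sketch of that aggregation step only; the three-action space, layer sizes, connectivity matrix, and observation features are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np

# Assumed setup (not from the paper): 4 agents, 3 actions per agent
# (0 = compute locally, 1 = offload to edge, 2 = use cached result).

def normalize_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_q_values(A, obs, W1, W2):
    """Two graph-convolution layers mapping per-agent observations to
    per-agent Q-values, so each decision aggregates neighbors' state."""
    A_norm = normalize_adjacency(A)
    h = np.maximum(A_norm @ obs @ W1, 0.0)  # layer 1 + ReLU
    return A_norm @ h @ W2                  # layer 2: one Q-value per action

rng = np.random.default_rng(0)
n_agents, obs_dim, hidden, n_actions = 4, 6, 8, 3
A = np.array([[0, 1, 1, 0],                 # assumed device/edge connectivity
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
obs = rng.normal(size=(n_agents, obs_dim))  # e.g., queue, channel, cache state
W1 = rng.normal(scale=0.1, size=(obs_dim, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, n_actions))

Q = gcn_q_values(A, obs, W1, W2)
actions = Q.argmax(axis=1)                  # greedy offloading decision per agent
print(Q.shape, actions)
```

In an actual DGRL training loop, the weights would be learned from a latency-based reward rather than drawn at random; this sketch only shows how graph convolution couples the agents' decisions.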