3.8 Proceedings Paper

Artificial Intelligence Enabled Distributed Edge Computing for Internet of Things Applications

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/DCOSS49796.2020.00077

Keywords

Edge Computing; Game Theory; Reinforcement Learning; Internet of Things

Funding

  1. NSF [CRII-1849739]
  2. Hellenic Foundation for Research and Innovation (H.F.R.I.) under the First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant [HFRI-FM17-2436]

Abstract

Artificial Intelligence (AI)-based techniques are typically used to model decision-making in terms of strategies and mechanisms that can result in optimal payoffs for a number of interacting entities, often presenting antagonistic behaviors. In this paper, we propose an AI-enabled multi-access edge computing (MEC) framework, supported by computing-equipped Unmanned Aerial Vehicles (UAVs), to facilitate IoT applications. Initially, the problem of determining the IoT nodes' optimal data offloading strategies to the UAV-mounted MEC servers, while accounting for the IoT nodes' communication and computation overhead, is formulated based on a game-theoretic model. The existence of at least one Pure Nash Equilibrium (PNE) point is shown by proving that the game is submodular. Furthermore, different operation points (i.e., offloading strategies) are obtained and studied, based either on the outcome of the Best Response Dynamics (BRD) algorithm or via alternative reinforcement learning approaches (i.e., gradient ascent, log-linear, and Q-learning algorithms), which explore and learn the environment toward determining the users' stable data offloading strategies. The corresponding outcomes and inherent features of these approaches are critically compared against each other via modeling and simulation.
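The learning-based strategy selection described in the abstract can be pictured with a small, self-contained example. The snippet below is a minimal, stateless Q-learning toy, assuming a hypothetical congestion-style payoff, a discrete set of offloading fractions, and four IoT nodes sharing one UAV-mounted MEC server; these choices are illustrative assumptions and do not reproduce the paper's game formulation or utility functions.

```python
# Illustrative sketch only: each IoT node repeatedly picks a discrete data-offloading
# fraction for a shared UAV-mounted MEC server and updates a Q-value from its payoff.
# The payoff model below (benefit reduced by aggregate congestion) is an assumption
# made for this toy example, not the paper's formulation.
import random

NODES = 4                               # number of IoT nodes (assumed)
ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]   # candidate offloading fractions (assumed)
EPISODES = 5000
ALPHA, EPSILON = 0.1, 0.1               # learning rate and exploration probability

def payoff(my_fraction, total_offloaded):
    """Toy utility: offloading helps, but the shared server congests as the
    aggregate offloaded load grows; local processing carries a small cost."""
    congestion = total_offloaded / NODES
    return my_fraction * (1.0 - congestion) - 0.2 * (1.0 - my_fraction)

# One Q-table (over the discrete action set) per node.
Q = [[0.0 for _ in ACTIONS] for _ in range(NODES)]

for _ in range(EPISODES):
    # Epsilon-greedy action selection for every node.
    choices = []
    for n in range(NODES):
        if random.random() < EPSILON:
            choices.append(random.randrange(len(ACTIONS)))
        else:
            choices.append(max(range(len(ACTIONS)), key=lambda a: Q[n][a]))
    total = sum(ACTIONS[a] for a in choices)
    # Each node updates its own Q-value from the payoff it observed.
    for n, a in enumerate(choices):
        reward = payoff(ACTIONS[a], total)
        Q[n][a] += ALPHA * (reward - Q[n][a])

for n in range(NODES):
    best = max(range(len(ACTIONS)), key=lambda a: Q[n][a])
    print(f"node {n}: learned offloading fraction = {ACTIONS[best]}")
```

Under such a toy payoff, the nodes' greedy choices typically settle once additional offloading is outweighed by the shared server's congestion, loosely mirroring the convergence to stable offloading strategies discussed above.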

