Article

Energy-Efficient Resource Allocation for Blockchain-Enabled Industrial Internet of Things With Deep Reinforcement Learning

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 8, Issue 4, Pages 2318-2329

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2020.3030646

Keywords

Blockchain; Servers; Optimization; Security; Task analysis; deep reinforcement learning (DRL); Industrial Internet of Things (IIoT); mobile-edge computing (MEC); resource allocation

Funding

  1. National Natural Science Foundation of China [61901011, 61671029]
  2. Foundation of Beijing Municipal Commission of Education [KM202010005017]

Abstract

In this article, mobile-edge computing is integrated into blockchain-enabled IIoT systems to enhance device computing capabilities and consensus efficiency while accounting for the weighted system cost. An optimization framework solved with deep reinforcement learning addresses the energy, consensus-efficiency, and computation-overhead shortcomings of existing solutions.
The Industrial Internet of Things (IIoT) has emerged with the development of various communication technologies. To guarantee the security and privacy of massive IIoT data, blockchain is widely considered a promising technology and is applied to IIoT. However, several issues remain in existing blockchain-enabled IIoT: 1) unbearable energy consumption for computation tasks; 2) poor efficiency of the consensus mechanism in blockchain; and 3) serious computation overhead in network systems. To handle these issues and challenges, in this article, we integrate mobile-edge computing (MEC) into blockchain-enabled IIoT systems to promote the computation capability of IIoT devices and improve the efficiency of the consensus process. Meanwhile, the weighted system cost, including the energy consumption and the computation overhead, is jointly considered. Moreover, we propose an optimization framework for blockchain-enabled IIoT systems to decrease consumption, and formulate the proposed problem as a Markov decision process (MDP). The master controller, offloading decision, block size, and computing server can be dynamically selected and adjusted to optimize the devices' energy allocation and reduce the weighted system cost. Accordingly, due to the high-dynamic and large-dimensional characteristics of the problem, deep reinforcement learning (DRL) is introduced to solve it. Simulation results demonstrate that our proposed scheme improves system performance significantly compared to existing schemes.
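The abstract's core idea — choosing an offloading decision and a block size to minimize a weighted sum of energy consumption and computation overhead — can be illustrated with a minimal one-step sketch. All weights, constants, block-size options, and the cost model below are illustrative assumptions, not the paper's actual formulation; a DRL agent would learn this action mapping from high-dimensional, dynamic states rather than enumerate it.

```python
import itertools

# Assumed weights for the two cost terms (not taken from the paper).
W_ENERGY, W_OVERHEAD = 0.6, 0.4
BLOCK_SIZES = (1, 2, 4)  # hypothetical block-size choices


def weighted_cost(energy, overhead):
    # Weighted system cost: energy consumption plus computation overhead.
    return W_ENERGY * energy + W_OVERHEAD * overhead


def cost_of(offload, block_size, task_bits):
    """Toy cost model: offloading to a MEC server lowers device energy,
    while consensus overhead grows with the chosen block size.
    Every constant here is an assumption for illustration."""
    if offload:
        energy = 1e-7 * task_bits      # assumed transmission energy
        overhead = 0.20 * block_size   # assumed consensus/validation overhead
    else:
        energy = 5e-7 * task_bits      # assumed local CPU energy
        overhead = 0.05 * block_size
    return weighted_cost(energy, overhead)


def best_action(task_bits):
    # One-step greedy policy: pick the (offload, block_size) pair that
    # minimizes the weighted cost for the current task size.
    actions = itertools.product((False, True), BLOCK_SIZES)
    return min(actions, key=lambda a: cost_of(a[0], a[1], task_bits))
```

Under these toy constants, a large task (`best_action(1e6)`) favors offloading with a small block, while a tiny task is cheaper to run locally — the same energy/overhead trade-off the paper's MDP formulation captures over time-varying states.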

