Article

Energy-Efficient Resource Allocation for Blockchain-Enabled Industrial Internet of Things With Deep Reinforcement Learning

Journal

IEEE Internet of Things Journal
Volume 8, Issue 4, Pages 2318-2329

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JIOT.2020.3030646

Keywords

Blockchain; Servers; Optimization; Security; Task analysis; Deep reinforcement learning (DRL); Industrial Internet of Things (IIoT); Mobile-edge computing (MEC); Resource allocation

Funding

  1. National Natural Science Foundation of China [61901011, 61671029]
  2. Foundation of Beijing Municipal Commission of Education [KM202010005017]

Abstract

In this article, mobile-edge computing is integrated into blockchain-enabled IIoT systems to enhance the computing capability of devices and the efficiency of the consensus process while accounting for the weighted system cost. An optimization framework solved with deep reinforcement learning addresses several limitations of existing solutions.
The Industrial Internet of Things (IIoT) has emerged with the development of various communication technologies. To guarantee the security and privacy of massive IIoT data, blockchain is widely regarded as a promising technology and has been applied to IIoT. However, existing blockchain-enabled IIoT still faces several issues: 1) prohibitive energy consumption for computation tasks; 2) poor efficiency of the consensus mechanism in blockchain; and 3) heavy computation overhead in network systems. To handle these issues and challenges, in this article we integrate mobile-edge computing (MEC) into blockchain-enabled IIoT systems to enhance the computation capability of IIoT devices and improve the efficiency of the consensus process. Meanwhile, the weighted system cost, including the energy consumption and the computation overhead, is jointly considered. Moreover, we propose an optimization framework for blockchain-enabled IIoT systems to decrease this cost, and formulate the problem as a Markov decision process (MDP). The master controller, offloading decision, block size, and computing server can be dynamically selected and adjusted to optimize the devices' energy allocation and reduce the weighted system cost. Because of the highly dynamic and large-dimensional characteristics of the problem, deep reinforcement learning (DRL) is introduced to solve it. Simulation results demonstrate that the proposed scheme significantly improves system performance compared with existing schemes.
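The abstract describes an MDP whose joint action selects the master controller, offloading decision, block size, and computing server, solved with DRL. The sketch below is a minimal, hedged illustration of that idea: a DQN-style epsilon-greedy agent over a discretized joint action space, with an assumed convex-combination form for the weighted system cost. All names, grids, dimensions, and the network architecture are illustrative assumptions, not the paper's actual design.

```python
# Hedged sketch, not the paper's implementation: a minimal DQN-style agent for
# the joint action described in the abstract (master controller, offloading
# decision, block size, computing server). All names, grids, dimensions, and
# the form of the weighted cost are illustrative assumptions.
import random
from itertools import product

import torch
import torch.nn as nn

# Assumed discretization of the joint action space.
CONTROLLERS = [0, 1]          # candidate nodes for the consensus master controller
OFFLOAD_CHOICES = [0, 1]      # 0: execute locally, 1: offload to a MEC server
BLOCK_SIZES = [1, 2, 4, 8]    # candidate block sizes (MB), assumed grid
SERVERS = [0, 1, 2]           # indices of available edge computing servers
ACTIONS = list(product(CONTROLLERS, OFFLOAD_CHOICES, BLOCK_SIZES, SERVERS))

STATE_DIM = 6  # e.g., channel gains, task sizes, queue lengths (assumed)


def weighted_cost(energy: float, overhead: float, w: float = 0.5) -> float:
    """Assumed convex-combination form of the weighted system cost:
    w * energy consumption + (1 - w) * computation overhead."""
    return w * energy + (1.0 - w) * overhead


class QNetwork(nn.Module):
    """Small MLP mapping the system state to Q-values over all joint actions."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_action(qnet: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice over the flattened joint action space."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(state).argmax().item())


if __name__ == "__main__":
    qnet = QNetwork(STATE_DIM, len(ACTIONS))
    state = torch.randn(STATE_DIM)  # placeholder system observation
    idx = select_action(qnet, state, epsilon=0.1)
    controller, offload, block_size, server = ACTIONS[idx]
    # The training reward would be the negative weighted cost observed after
    # applying the chosen action in the environment.
    print(f"controller={controller}, offload={offload}, "
          f"block_size={block_size} MB, server={server}")
    print("example weighted cost:", weighted_cost(energy=2.0, overhead=1.0))
```

Flattening the multidimensional discrete action into a single index is one common way to fit such joint decisions into a value-based DRL agent; a full solution would also need experience replay, a target network, and an environment model of the MEC and blockchain dynamics.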
