Journal
IEEE SENSORS JOURNAL
Volume 23, Issue 15, Pages 17771-17783
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2023.3285751
Keywords
Q-learning; underwater wireless sensor networks (UWSNs); opportunistic routing (OR); void region; routing protocol
An efficient routing protocol is critical for data transmission in underwater wireless sensor networks (UWSNs). Aiming at the problem of void regions in UWSNs, this article proposes a reinforcement learning-based opportunistic routing protocol (DROR). Considering the limited energy and the underwater environment, DROR is a receiver-based routing protocol that combines reinforcement learning (RL) with opportunistic routing (OR) to ensure real-time data transmission as well as energy efficiency. To achieve reliable transmission when encountering void regions, a void recovery mechanism is designed that enables packets to bypass void nodes and continue forwarding. Furthermore, a relative Q-based dynamic scheduling strategy is proposed to ensure that packets are forwarded efficiently along the globally optimal routing path. Simulation results show that the proposed protocol performs well in terms of end-to-end delay, reliability, and energy efficiency in UWSNs.
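To illustrate the general idea of Q-learning-based next-hop selection described in the abstract, the sketch below shows a minimal per-node Q-table over candidate forwarders with an epsilon-greedy choice and a standard Q-learning update. This is a hedged illustration only, not the paper's actual DROR algorithm: the `QRouter` class, the `reward` function, and its weighting of depth progress versus residual energy are all hypothetical assumptions made for demonstration.

```python
import random


class QRouter:
    """Illustrative Q-learning forwarder selection for a single sensor node.

    Not the DROR protocol itself; a generic sketch of the RL component.
    """

    def __init__(self, neighbors, alpha=0.5, gamma=0.8, epsilon=0.1):
        # One Q-value per candidate forwarding neighbor
        self.q = {n: 0.0 for n in neighbors}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self):
        # Epsilon-greedy: occasionally explore, otherwise pick the max-Q neighbor
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, neighbor, reward, next_max_q):
        # Standard Q-learning update toward reward + discounted downstream value
        self.q[neighbor] += self.alpha * (
            reward + self.gamma * next_max_q - self.q[neighbor]
        )


def reward(depth_gain, residual_energy):
    # Hypothetical reward shaping: favor progress toward the surface sink
    # and neighbors with more remaining energy (weights are arbitrary here)
    return 0.7 * depth_gain + 0.3 * residual_energy


# Usage: after neighbor "a" successfully relays a packet, reinforce it
router = QRouter(["a", "b"], epsilon=0.0)
router.update("a", reward(depth_gain=1.0, residual_energy=0.5), next_max_q=0.0)
print(router.select())  # with epsilon=0, picks the highest-Q neighbor
```

A void region would correspond to the case where no neighbor offers positive depth gain; the paper's void recovery mechanism then reroutes packets around such nodes, which this sketch does not model.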