Article

Collaboration-Aware Relay Selection for AUV in Internet of Underwater Network: Evolving Contextual Bandit Learning Approach

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 3, Pages 2430-2443

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JIOT.2022.3211953

Keywords

Relays; Vehicle dynamics; Heuristic algorithms; Clustering algorithms; Topology; Reliability; Pipelines; Collaborative effect; contextual bandit learning; cooperative transmission; Internet of Underwater Things (IoUT); relay selection


In this article, we propose a new contextual multiarmed bandit with evolving relay set learning framework to address crucial issues in the Internet of Underwater Things. The framework incorporates collaborative effects and contextual environment factors to improve relay selection and transmission performance. The designed collaboration-aware online contextual bandit learning algorithm enables adaptive relay switching and high-capacity transmission, and extensive simulations demonstrate the effectiveness of the proposed approach.
In the Internet of Underwater Things, data collection is assisted by an autonomous underwater vehicle (AUV) to enhance transmission reliability. The AUV acts as a mobile collector and forwards the collected data to the station via relay nodes. However, the high mobility of the AUV demands an adaptive and efficient relay selection scheme to achieve good capacity performance. In this article, we propose a new contextual multiarmed bandit with evolving relay set (CMAB-ERS) learning framework, which addresses crucial issues including dynamic environmental conditions and an evolving relay set. To handle the evolving relay set, CMAB-ERS incorporates collaborative effects into both the inference and learning processes: new relays acquire prior knowledge from experienced nodes that share their observations, which significantly reduces the learning time. To overcome the uncertainty of environmental information, we exploit contextual environment factors to assist relay reward estimation and perform a time-sensitive parameter update after every transmit-receive cycle, aiming to minimize the potential loss caused by the time-varying channel. Correspondingly, we design the collaboration-aware online contextual bandit learning (COCBL) algorithm, which enables the AUV to switch to the optimal relay adaptively and promises high-capacity transmission. Further, we rigorously prove the convergence of the COCBL algorithm under the evolving relay set and give an upper bound on its cumulative regret. Finally, extensive simulation results demonstrate the effectiveness of the proposed COCBL.
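To make the abstract's ideas concrete, the sketch below shows a generic LinUCB-style contextual bandit for relay selection with an evolving arm set. It is not the paper's COCBL algorithm: the class and function names (RelaySelector, add_relay, warm-start via donor averaging), the reward model, and the context features are illustrative assumptions. It only demonstrates the three ingredients the abstract describes: context-based reward estimation per relay, per-cycle parameter updates after each transmit-receive round, and warm-starting newly joined relays from experienced nodes' shared statistics.

```python
# Illustrative LinUCB-style sketch of contextual relay selection with an evolving relay set.
# All names and the toy reward are hypothetical stand-ins, not the paper's COCBL algorithm.
import numpy as np


class RelaySelector:
    """Contextual bandit over an evolving set of candidate relays (LinUCB-style)."""

    def __init__(self, context_dim, alpha=1.0):
        self.d = context_dim      # dimension of environment context (e.g., SNR, depth, distance)
        self.alpha = alpha        # exploration weight
        self.A = {}               # relay_id -> d x d design matrix
        self.b = {}               # relay_id -> d-dim reward-weighted context sum

    def add_relay(self, relay_id, donors=None):
        """Register a relay; optionally warm-start it from experienced relays' statistics."""
        if donors:
            # Collaborative prior: average the sufficient statistics of donor relays
            # (a stand-in for the paper's observation-sharing mechanism).
            self.A[relay_id] = sum(self.A[r] for r in donors) / len(donors)
            self.b[relay_id] = sum(self.b[r] for r in donors) / len(donors)
        else:
            self.A[relay_id] = np.eye(self.d)
            self.b[relay_id] = np.zeros(self.d)

    def remove_relay(self, relay_id):
        self.A.pop(relay_id, None)
        self.b.pop(relay_id, None)

    def select(self, contexts):
        """Pick the relay with the highest upper confidence bound on expected reward."""
        best, best_ucb = None, -np.inf
        for relay_id, x in contexts.items():
            A_inv = np.linalg.inv(self.A[relay_id])
            theta = A_inv @ self.b[relay_id]
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            if ucb > best_ucb:
                best, best_ucb = relay_id, ucb
        return best

    def update(self, relay_id, x, reward):
        """Per-cycle update after each transmit-receive round."""
        self.A[relay_id] += np.outer(x, x)
        self.b[relay_id] += reward * x


# Minimal usage: three relays, 4-dim context, then a new relay warm-started from the others.
rng = np.random.default_rng(0)
sel = RelaySelector(context_dim=4, alpha=0.5)
for r in ("relay_1", "relay_2", "relay_3"):
    sel.add_relay(r)
for t in range(200):
    ctx = {r: rng.normal(size=4) for r in sel.A}
    chosen = sel.select(ctx)
    reward = float(ctx[chosen][0] > 0)   # toy reward standing in for achieved capacity
    sel.update(chosen, ctx[chosen], reward)
sel.add_relay("relay_new", donors=["relay_1", "relay_2", "relay_3"])
```

In this toy version, the warm start simply averages the donor relays' design matrices and reward vectors, so a newly joined relay begins with a nontrivial prior rather than from scratch; the paper's collaborative mechanism and regret analysis are of course more elaborate.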
