Journal
JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES
Volume 34, Issue 6, Pages 2666-2679
Publisher: ELSEVIER
DOI: 10.1016/j.jksuci.2020.03.012
Keywords
Block error rate; Signal to noise ratio; Additive white Gaussian noise; Recurrent neural network; Deep learning; Turbo code
This paper examines the performance of a neural Turbo decoder and a deep learning-based Turbo decoder, comparing them with a conventional convolutional Viterbi decoder. Both structures are analyzed under different input data lengths and code rates.
Application of deep learning to error control coding is gaining special attention, and neural network decoding architectures are being developed and compared with conventional ones. Turbo codes conventionally use the BCJR algorithm for decoding. In this paper, the performance of a neural Turbo decoder and a deep learning-based Turbo decoder is examined. A category of sequential codes is utilized to construct the RSC (Recursive Systematic Convolutional) codes that serve as basic elements of the Turbo encoder. Sequential codes suit the memory requirement of convolutional codes, which act as components of the Turbo encoder. Turbo decoders are constructed in two ways: as a neural Turbo decoder and as a deep learning Turbo decoder. Both structures are based on recurrent neural network (RNN) architectures, which are preferred because memory is an inherent feature. The BER performance of both is compared with that of a convolutional Viterbi decoder over an AWGN channel, and both structures are studied for different input data lengths and code rates. (c) 2020 The Authors. Published by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
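To make the building blocks concrete, the sketch below shows a rate-1/2 RSC component encoder of the kind used in a Turbo encoder, together with BPSK transmission over an AWGN channel and a hard-decision BER estimate. The generator polynomials (octal 7 for feedback, 5 for feedforward, memory 2) are illustrative assumptions, not necessarily those used in the paper, and no Turbo interleaving or iterative decoding is included.

```python
import numpy as np

def rsc_encode(bits):
    """Rate-1/2 RSC encoder with feedback 1+D+D^2 and feedforward 1+D^2
    (octal generators 7 and 5) -- an illustrative choice of component code.
    Returns (systematic, parity) bit streams."""
    s = [0, 0]                      # shift register (memory 2)
    sys_out, par_out = [], []
    for u in bits:
        fb = u ^ s[0] ^ s[1]        # feedback bit: u + D + D^2 taps
        par = fb ^ s[1]             # parity: feedforward 1 + D^2 on fb
        sys_out.append(u)
        par_out.append(par)
        s = [fb, s[0]]              # shift the register
    return sys_out, par_out

def awgn_ber(snr_db, n=100000, seed=0):
    """Uncoded BPSK over AWGN: transmit random bits at the given Eb/N0 (dB)
    and return the hard-decision bit error rate."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n)
    x = 1 - 2 * bits                             # BPSK map: 0 -> +1, 1 -> -1
    sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10))  # noise std from Eb/N0
    y = x + sigma * rng.standard_normal(n)
    est = (y < 0).astype(int)                    # hard decision at 0
    return float(np.mean(est != bits))
```

For example, `rsc_encode([1, 0, 1, 1])` yields the systematic stream `[1, 0, 1, 1]` with parity `[1, 1, 0, 0]`, and `awgn_ber` falls rapidly with SNR, the behavior a BER-vs-SNR comparison of Viterbi and RNN decoders would be plotted against.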