Proceedings Paper

Reinforcement Learning for Channel Coding: Learned Bit-Flipping Decoding

Publisher

IEEE
DOI: 10.1109/allerton.2019.8919799


Funding

  1. European Union, Marie Skłodowska-Curie Actions (MSCA) [749798]
  2. National Science Foundation (NSF), Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1718494]


Abstract

In this paper, we use reinforcement learning to find effective decoding strategies for binary linear codes. We start by reviewing several iterative decoding algorithms that involve a decision-making process at each step, including bit-flipping (BF) decoding, residual belief propagation, and anchor decoding. We then illustrate how such algorithms can be mapped to Markov decision processes, allowing for data-driven learning of optimal decision strategies rather than basing decisions on heuristics or intuition. As a case study, we consider BF decoding for both the binary symmetric channel and the additive white Gaussian noise channel. Our results show that learned BF decoders can offer a range of performance-complexity trade-offs for the considered Reed-Muller and BCH codes, and achieve near-optimal performance in some cases. We also demonstrate learning convergence speed-ups when biasing the learning process towards correct decoding decisions, as opposed to relying only on random exploration and past knowledge.
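The abstract's mapping of bit-flipping decoding to a Markov decision process can be sketched as follows. This is an illustrative toy, not the paper's implementation: the environment interface, the choice of code (a (7,4) Hamming parity-check matrix), and the terminal reward are all assumptions made here for concreteness. The state is the syndrome of the current hard decisions, an action flips one bit, and an episode ends when the syndrome is zero.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (toy example; the paper
# considers Reed-Muller and BCH codes).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

class BFDecodingMDP:
    """Hypothetical sketch of BF decoding over a BSC cast as an MDP."""

    def __init__(self, H):
        self.H = H
        self.n = H.shape[1]  # block length / number of flip actions

    def reset(self, received):
        # The hard-decision word from the channel defines the initial state.
        self.y = np.array(received, dtype=np.uint8)
        return self.syndrome()

    def syndrome(self):
        # State observed by the learning agent: H y^T mod 2.
        return tuple((self.H @ self.y) % 2)

    def step(self, bit):
        # Action: flip one bit. Reward +1 once a codeword is reached
        # (zero syndrome); the episode then terminates.
        self.y[bit] ^= 1
        s = self.syndrome()
        done = not any(s)
        reward = 1.0 if done else 0.0
        return s, reward, done

# Toy rollout: all-zero codeword with bit 2 flipped by the channel.
env = BFDecodingMDP(H)
state = env.reset([0, 0, 1, 0, 0, 0, 0])   # nonzero syndrome
state, reward, done = env.step(2)          # hand-picked flip; an RL agent
                                           # would learn this decision rule
```

A learned BF decoder would replace the hand-picked flip with a policy (e.g. a Q-function over syndrome states) trained on simulated channel noise.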

