Journal
2019 57TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON)
Volume -, Issue -, Pages 922-929
Publisher
IEEE
DOI: 10.1109/allerton.2019.8919799
Funding
- European Union, Marie Curie Actions (MSCA) [749798]
- National Science Foundation (NSF), Division of Computing and Communication Foundations [1718494]
Abstract
In this paper, we use reinforcement learning to find effective decoding strategies for binary linear codes. We start by reviewing several iterative decoding algorithms that involve a decision-making process at each step, including bit-flipping (BF) decoding, residual belief propagation, and anchor decoding. We then illustrate how such algorithms can be mapped to Markov decision processes, allowing for data-driven learning of optimal decision strategies rather than basing decisions on heuristics or intuition. As a case study, we consider BF decoding for both the binary symmetric and additive white Gaussian noise channels. Our results show that learned BF decoders can offer a range of performance-complexity trade-offs for the considered Reed-Muller and BCH codes, and achieve near-optimal performance in some cases. We also demonstrate learning convergence speed-ups when biasing the learning process towards correct decoding decisions, as opposed to relying only on random exploration and past knowledge.
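The mapping of BF decoding to a Markov decision process can be sketched as follows: the agent observes a state derived from the current syndrome, each action flips one bit, and a reward is given when a codeword is reached. This is a minimal illustrative sketch, not the paper's setup; the tiny [7,4] Hamming parity-check matrix `H`, the syndrome-only state, and the reward values are all assumptions made here for concreteness.

```python
import numpy as np

# Illustrative parity-check matrix of the [7,4] Hamming code
# (an assumption; the paper studies Reed-Muller and BCH codes).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])


class BFDecodingMDP:
    """Bit-flipping decoding as an MDP.

    State: the current syndrome (a tuple of parity-check values).
    Action: the index of the bit to flip.
    Reward: +1.0 on reaching a zero syndrome, -0.1 per step otherwise.
    """

    def __init__(self, H, p=0.1, seed=0):
        self.H = H
        self.n = H.shape[1]
        self.p = p  # crossover probability of the binary symmetric channel
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # By linearity, transmitting the all-zero codeword is sufficient;
        # the noisy received word y is the hidden environment state.
        self.y = (self.rng.random(self.n) < self.p).astype(int)
        return self._syndrome()

    def _syndrome(self):
        return tuple(self.H @ self.y % 2)

    def step(self, action):
        self.y[action] ^= 1  # flip the chosen bit
        s = self._syndrome()
        done = not any(s)    # zero syndrome: a codeword was reached
        reward = 1.0 if done else -0.1
        return s, reward, done


env = BFDecodingMDP(H, p=0.15)
s = env.reset()
for _ in range(20):
    if not any(s):
        break
    # A learned policy would choose the action here; as a stand-in we use
    # the classical Gallager-style heuristic: flip the bit that appears in
    # the most unsatisfied parity checks.
    a = int(np.argmax(np.array(s) @ H))
    s, r, done = env.step(a)
    if done:
        break
```

In a reinforcement-learning setup, the heuristic action choice above would be replaced by a policy trained on (state, action, reward) transitions generated by the environment.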