Article

Low-Complexity Recurrent Neural Network Based Equalizer With Embedded Parallelization for 100-Gbit/s/λ PON

Journal

Journal of Lightwave Technology
Volume 40, Issue 5, Pages 1353-1359

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JLT.2021.3128579

Keywords

Equalizers; Artificial neural networks; Passive optical networks; Training; Neurons; Quantization (signal); Optical transmitters; Digital signal processing (DSP); intensity modulation and direct detection (IMDD); machine learning; neural network (NN); passive optical network (PON)


Abstract
To meet the demands of emerging applications, such as fixed-mobile convergence for the fifth generation of mobile networks and beyond, a 100-Gbit/s/λ access network has become the next priority on the passive optical network roadmap. We experimentally demonstrate 100-Gbit/s/λ intensity-modulation and direct-detection passive optical network transmission based on four-level pulse amplitude modulation in the O-band using 25G-class optics. To mitigate the severe distortions caused by inter-symbol interference and fiber nonlinearity, a low-complexity recurrent neural network based equalizer with parallel outputs is proposed. Experimental results show that the proposed recurrent neural network equalizer consistently outperforms a fully-connected neural network with the same input/output size and number of training parameters. The neural network equalizer's sensitivity to quantization is also evaluated. To further assess the complexity and actual hardware resource consumption of the parallel-output equalizers, we implement an 8-bit-integer-quantized neural network model on an FPGA, with the benefits and challenges validated and discussed.
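The two core ideas in the abstract — a recurrent equalizer that emits several equalized symbols per recurrent update (so one hidden-state computation is amortized over parallel outputs), and 8-bit integer weight quantization for FPGA deployment — can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the authors' architecture: the layer sizes, Elman-style cell, random weights, and the `equalize`/`quantize_int8` helpers are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (assumptions, not from the paper):
N_IN, N_HID, P = 16, 24, 4   # input taps, hidden units, parallel outputs per step

# Randomly initialized weights stand in for a trained model.
W_in  = rng.normal(0, 0.1, (N_HID, N_IN))
W_rec = rng.normal(0, 0.1, (N_HID, N_HID))
W_out = rng.normal(0, 0.1, (P, N_HID))
b_h   = np.zeros(N_HID)
b_o   = np.zeros(P)

def equalize(samples):
    """Run the recurrent equalizer, producing P soft outputs per recurrent update."""
    h = np.zeros(N_HID)
    out = []
    # Advance the input window by P samples each step: one shared hidden-state
    # update serves P parallel output neurons.
    for t in range(0, len(samples) - N_IN + 1, P):
        x = samples[t:t + N_IN]
        h = np.tanh(W_in @ x + W_rec @ h + b_h)   # shared recurrent state
        out.append(W_out @ h + b_o)               # P equalized symbols at once
    return np.concatenate(out)

def quantize_int8(w):
    """Symmetric per-tensor 8-bit integer quantization of a weight matrix."""
    scale = np.max(np.abs(w)) / 127.0
    return np.round(w / scale).astype(np.int8), scale

rx = rng.normal(0, 1, 256)        # placeholder received waveform
y = equalize(rx)                  # equalized soft symbols

W_in_q, s = quantize_int8(W_in)   # int8 weights plus a float rescaling factor
print(y.shape, W_in_q.dtype)
```

Amortizing one recurrent update over P outputs is what makes such an equalizer attractive for hardware parallelization: the sequential dependency (the hidden state) is updated only once per P symbols, easing the throughput bottleneck that recurrence otherwise creates on an FPGA.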

