Article

Deep Learning-Aided Perturbation Model-Based Fiber Nonlinearity Compensation

Journal

JOURNAL OF LIGHTWAVE TECHNOLOGY
Volume 41, Issue 12, Pages 3976-3985

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JLT.2023.3279449

Keywords

Optical fiber communication; nonlinearity compensation; machine learning; perturbation theory-based nonlinearity compensation; recurrent neural network


Fiber nonlinearity effects cap the achievable rates and reach of long-haul optical fiber communication links, and conventional nonlinearity compensation methods have limited practical usability. Recently, machine learning techniques have been used to optimize these methods, with claims of improved performance and reduced complexity. This paper revisits the benefits of learned nonlinearity compensation and proposes a fully learned structure using a bi-directional recurrent neural network. Numerical simulations demonstrate an improved performance-complexity trade-off compared to existing techniques.
Fiber nonlinearity effects cap achievable rates and ranges in long-haul optical fiber communication links. Conventional nonlinearity compensation methods, such as perturbation theory-based nonlinearity compensation (PB-NLC), attempt to compensate for the nonlinearity by approximating analytical solutions to the signal propagation over optical fibers. However, their practical usability is limited by model mismatch and the immense computational complexity associated with the analytical computation of perturbation triplets and the nonlinearity distortion field. Recently, machine learning techniques have been used to optimize parameters of PB-based approaches, which traditionally have been determined analytically from physical models. It has been claimed in the literature that the learned PB-NLC approaches have improved performance and/or reduced computational complexity over their non-learned counterparts. In this paper, we first revisit the claimed benefits of the learned PB-NLC approaches by carefully carrying out a comprehensive performance-complexity analysis utilizing state-of-the-art complexity reduction methods. Interestingly, our results show that least squares-based PB-NLC with clustering quantization has the best performance-complexity trade-off among the learned PB-NLC approaches. Second, we advance the state-of-the-art of learned PB-NLC by proposing and designing a fully learned structure that adopts the noiseless Manakov equation as the channel propagation model. We apply a bi-directional recurrent neural network to learn features that resemble the analytically computed perturbation triplets and serve as input features for the neural network that estimates the nonlinearity distortion field. Finally, we demonstrate through numerical simulations that our proposed fully learned approach achieves an improved performance-complexity trade-off compared to the existing learned and non-learned PB-NLC techniques.
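To make the role of the perturbation triplets concrete, the sketch below illustrates the standard first-order perturbation estimate of the nonlinear distortion on symbol k as a triplet-weighted sum of symbol products, which the compensator then subtracts from the received symbols. This is a minimal illustration of the general PB-NLC principle, not the authors' implementation; the function name, the dictionary representation of the triplet coefficients, and the window size are assumptions for the example.

```python
import numpy as np

def pb_nlc_distortion(symbols, triplets, window):
    """First-order perturbation estimate of the nonlinear distortion field.

    symbols  : 1-D complex array of transmitted symbols A_k.
    triplets : dict mapping offset pairs (m, n) to complex coefficients
               C_{m,n} (hypothetical representation for this sketch).
    window   : maximum symbol offset considered in each sum.

    The distortion on symbol k is approximated by
        delta_k = sum_{m,n} C_{m,n} * A_{k+m} * A_{k+n} * conj(A_{k+m+n}).
    """
    K = len(symbols)
    delta = np.zeros(K, dtype=complex)
    offsets = range(-window, window + 1)
    for k in range(K):
        for m in offsets:
            for n in offsets:
                i, j, l = k + m, k + n, k + m + n
                # Skip terms whose symbol indices fall outside the block.
                if 0 <= i < K and 0 <= j < K and 0 <= l < K:
                    delta[k] += (triplets.get((m, n), 0.0)
                                 * symbols[i] * symbols[j]
                                 * np.conj(symbols[l]))
    return delta

# Compensation subtracts the estimated distortion from the symbols.
def pb_nlc_compensate(symbols, triplets, window):
    return symbols - pb_nlc_distortion(symbols, triplets, window)
```

The triple loop makes the complexity issue raised in the abstract visible: the number of triplet terms grows quadratically with the window size, which is why triplet pruning and clustering quantization (grouping near-equal coefficients to share multiplications) matter, and why the paper's learned approach replaces the analytical triplet computation with features produced by a bi-directional recurrent neural network.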

Authors


