Journal
OPTICS LETTERS
Volume 48, Issue 19, Pages 4945-4948
Publisher
Optica Publishing Group
DOI: 10.1364/OL.499966
Keywords
-
Category
This study presents a self-denoising method for OCT images using single-spectrogram-based deep learning. The method can be tailored to the distinct noise of each individual image at low computational cost. Experimental results show that it effectively reduces speckle patterns and stripes, improving the signal-to-noise ratio and image contrast.
The presence of noise in images reconstructed with optical coherence tomography (OCT) is a key issue that limits further improvement of image quality. In this Letter, for the first time, to the best of our knowledge, a self-denoising method for OCT images is presented based on single-spectrogram deep learning. Different noises in different images can be customized at extremely low computational cost. The deep-learning model consists of two fully connected layers, two convolution layers, and one deconvolution layer; the input is the raw interference spectrogram, and the label is its image reconstructed using the Fourier transform. The denoised image is obtained by subtracting the noise predicted by the model from the label image. OCT images of a TiO2 phantom, an orange, and a zebrafish obtained with our spectral-domain OCT system are used as examples to demonstrate the capability of the method. The results demonstrate its effectiveness in reducing noise such as speckle patterns and horizontal and vertical stripes. Compared with the label image, the signal-to-noise ratio could be improved by 35.0 dB, and the image contrast could be improved by a factor of two. Compared with the results denoised by the averaging method, the mean peak signal-to-noise ratio is 26.2 dB. © 2023 Optica Publishing Group
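The pipeline the abstract describes has two stages: the label image is the standard Fourier-transform reconstruction of the raw interference spectrogram, and the denoised image is that label minus the noise predicted by the network. The following is a minimal NumPy sketch of this structure, not the authors' code: the network (two fully connected, two convolution, and one deconvolution layer) is replaced by a hypothetical stub, and the reconstruction details (DC removal, log scaling) are common SD-OCT conventions assumed here for illustration.

```python
import numpy as np

def reconstruct_label(spectrogram: np.ndarray) -> np.ndarray:
    """Standard SD-OCT reconstruction: one FFT per A-line.

    spectrogram: (n_alines, n_pixels) raw interference spectra.
    Returns the log-magnitude B-scan (the "label" image).
    """
    # Remove the per-A-line DC offset before transforming.
    spectra = spectrogram - spectrogram.mean(axis=1, keepdims=True)
    ascans = np.fft.fft(spectra, axis=1)
    # Keep the positive-depth half and convert to log magnitude (dB).
    half = ascans[:, : spectrogram.shape[1] // 2]
    return 20.0 * np.log10(np.abs(half) + 1e-12)

def predict_noise(spectrogram: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the per-image deep-learning noise
    prediction; the real model maps the raw spectrogram to a noise map."""
    return np.zeros_like(label)

def denoise(spectrogram: np.ndarray) -> np.ndarray:
    # Denoised image = label image - predicted noise (as in the abstract).
    label = reconstruct_label(spectrogram)
    return label - predict_noise(spectrogram, label)

# Synthetic example: a single reflector, i.e. one interference fringe
# frequency, which reconstructs to a peak at depth index 100.
n_alines, n_pixels = 64, 1024
k = np.arange(n_pixels)
spec = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * k / n_pixels)
spectrogram = np.tile(spec, (n_alines, 1))
image = denoise(spectrogram)
print(image.shape)  # prints (64, 512)
```

With the stubbed noise predictor the output equals the label image; in the paper, the subtraction is what removes speckle and stripe artifacts.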
Authors
Recommended
No data available