Article

Reconstruction for Cherenkov-Excited Luminescence Scanned Tomography Based on Unet Network

Journal

Publisher

CHINESE LASER PRESS
DOI: 10.3788/CJL202148.1707001

Keywords

medical optics; Cherenkov-excited luminescence scanned imaging; tomography; Tikhonov regularization; Unet neural network


A new tomographic reconstruction algorithm for CELSI is proposed based on a trained Unet neural network, which shows improved image quality and quantitative accuracy compared with traditional Tikhonov, sparse, and total variation regularizations. The algorithm can effectively reconstruct luminescent sources at depths of up to 50 mm, accurately recovering their size and fluorescence yield. Numerical simulations demonstrate good generalizability when the network is trained on single-source datasets, as well as high computational efficiency compared with traditional algorithms.
Objective As a new molecular imaging modality, Cherenkov-excited luminescence scanned imaging (CELSI) has demonstrated great potential, especially in radiation therapy diagnostics. However, it cannot provide in-depth information about molecular probes, so tomographic algorithms for CELSI need to be developed. Reconstructing the spatial distribution of luminescent sources from boundary measurements is a typical ill-posed problem. Our previous work demonstrated the feasibility and effectiveness of Tikhonov and sparse regularizations for CELSI reconstruction; however, the quality of the reconstructed images degrades when the luminescent source is located at deep positions. The objective of this study is to develop a reconstruction algorithm for CELSI that improves the quality of reconstructed images. Methods A two-stage reconstruction algorithm is developed in this study. First, a low-quality image was reconstructed using Tikhonov regularization at the first iteration. Then, the resultant image was input into a revised Unet network with an encoder-decoder architecture. The encoder comprised four convolution blocks and four downsampling layers. Each convolution block comprised two convolution layers, and each convolution had a kernel size of 3×3 with a stride of 1. Each downsampling layer was a 3×3 convolution with a stride of 2. In the upsampling path, transposed convolutions with a 3×3 kernel and a stride of 2 replaced the direct interpolation used in the standard Unet. Leaky rectified linear units were used as the activation function in each convolutional layer, and batch normalization was applied to accelerate learning. In addition, a skip connection was applied to connect the first and last layers. The feasibility of the algorithm was evaluated through numerical simulations; training and test datasets were generated using the open-source software NIRFAST.
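The revised Unet described above can be sketched in PyTorch. This is an illustrative reconstruction of the stated architecture only (four convolution blocks with strided-conv downsampling, transposed-conv upsampling, leaky ReLU, batch normalization, and a skip connection from the first to the last layer); the channel widths, image size, and class names are assumptions, since the paper does not specify them:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions (stride 1), each with batch norm and leaky ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1, inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class CELSIUnet(nn.Module):
    """Encoder-decoder sketch: four conv blocks with 3x3/stride-2 conv
    downsampling, 3x3/stride-2 transposed-conv upsampling, and a
    first-to-last skip connection (channel widths are hypothetical)."""
    def __init__(self, channels=(16, 32, 64, 128)):
        super().__init__()
        self.enc_blocks, self.downs = nn.ModuleList(), nn.ModuleList()
        in_ch = 1
        for ch in channels:
            self.enc_blocks.append(ConvBlock(in_ch, ch))
            # downsampling: 3x3 convolution with stride 2
            self.downs.append(nn.Conv2d(ch, ch, 3, stride=2, padding=1))
            in_ch = ch
        self.bottleneck = ConvBlock(channels[-1], channels[-1])
        self.ups, self.dec_blocks = nn.ModuleList(), nn.ModuleList()
        for ch in reversed(channels):
            # upsampling: 3x3 transposed conv, stride 2 (no interpolation)
            self.ups.append(nn.ConvTranspose2d(in_ch, ch, 3, stride=2,
                                               padding=1, output_padding=1))
            self.dec_blocks.append(ConvBlock(ch * 2, ch))
            in_ch = ch
        self.out_conv = nn.Conv2d(channels[0], 1, 1)

    def forward(self, x):
        skips, h = [], x
        for enc, down in zip(self.enc_blocks, self.downs):
            h = enc(h)
            skips.append(h)       # kept for concatenation in the decoder
            h = down(h)
        h = self.bottleneck(h)
        for up, dec, skip in zip(self.ups, self.dec_blocks, reversed(skips)):
            h = dec(torch.cat([up(h), skip], dim=1))
        # skip connection between the first (input) and last layers
        return self.out_conv(h) + x
```

The network maps a low-quality stage-1 image to a refined image of the same size; input dimensions must be divisible by 16 so that four stride-2 stages round-trip cleanly.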
For comparison, our algorithm was evaluated against Tikhonov regularization, approximate message passing (AMP), and graph-total variation (Graph-TV). Results and Discussions First, a single circular target with an 8 mm diameter was placed within the phantom at depths ranging from 10 mm to 50 mm. Although all four algorithms could reconstruct the distribution of the target, severe artifacts appeared in the images reconstructed by Tikhonov regularization, and the shapes of the reconstructed targets were distorted for AMP and Graph-TV (Fig. 5). Additionally, the quality of the images reconstructed by Tikhonov regularization, AMP, and Graph-TV degraded as the depth increased. By contrast, our results reveal that the quantitative accuracy of the recovered distributions of luminescence sources could be significantly improved by the proposed algorithm, which achieved the best image quality, with a high peak signal-to-noise ratio (>28 dB) and structural similarity (>0.92). Furthermore, experiments with two luminescent sources were conducted to evaluate the algorithm's performance. When the edge-to-edge distance of the two luminescent sources was <5 mm, Tikhonov regularization, AMP, and Graph-TV failed to reconstruct the source distributions, whereas the proposed algorithm could distinguish the two sources even when the edge-to-edge distance was ~1 mm (Fig. 9). In addition, the size and fluorescence yield of the sources reconstructed using the proposed algorithm were very close to their real values (Fig. 11). Further, the generalizability of the proposed algorithm was evaluated using a network trained on single-target datasets. Our results demonstrate that the proposed algorithm could reconstruct luminescent sources accurately even when the contrast between the source and the background was reduced to 2:1 (Fig. 12), and the two luminescent sources could be distinguished well when the edge-to-edge distance was >3 mm (Fig. 13).
The computational efficiency of the four algorithms was also compared: the three traditional reconstruction algorithms require >45 s, whereas our algorithm requires ~11 s (Table 1). Conclusions A tomographic reconstruction algorithm for CELSI is proposed to reconstruct the distributions of luminescence sources based on a trained Unet neural network. Numerical simulations are used to evaluate the performance of the proposed algorithm. Our results reveal that both the image quality and the quantitative accuracy of the reconstructed fluorescence yield can be improved using the proposed algorithm compared with the conventional Tikhonov, sparse, and total variation regularizations. The proposed algorithm can reconstruct the distributions of luminescent sources at depths of up to 50 mm, and it can recover the size and fluorescence yield of luminescent sources with an edge-to-edge distance of 1 mm. Numerical simulations of multiple luminescent sources show that the proposed algorithm generalizes well when trained on a dataset with a single luminescent source. Furthermore, it is computationally efficient. Although the network input of our algorithm is a low-quality image reconstructed using Tikhonov regularization, images reconstructed using other algorithms can also be used as network input without any additional processing.

