Journal
IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 41, Issue 4, Pages 903-914
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMI.2021.3125777
Keywords
Image segmentation; Neurons; Three-dimensional displays; Image reconstruction; Decoding; Feature extraction; Transforms; neuron reconstruction; deep learning; microscopy images
Funding
- National Natural Science Foundation of China [62073126, 61771189]
- Hunan Provincial Natural Science Foundation of China [2020JJ2008]
- Key Research and Development Program of Hunan Province [2022WK2011]
This paper presents a 3D neuron segmentation network called SGSNet that enhances weak neuronal structures and suppresses background noise. The network uses two decoding paths: one produces segmentation maps, and the other detects neuronal structures. A structure attention module integrates features from the two paths and provides contextual guidance to improve segmentation performance.
Digital reconstruction of neuronal morphologies in 3D microscopy images is critical in the field of neuroscience. However, most existing automatic tracing algorithms cannot obtain accurate neuron reconstructions when processing 3D neuron images contaminated by strong background noise or containing weak filament signals. In this paper, we present a 3D neuron segmentation network named Structure-Guided Segmentation Network (SGSNet) to enhance weak neuronal structures and remove background noise. The network contains a shared encoding path but utilizes two decoding paths, called the Main Segmentation Branch (MSB) and the Structure-Detection Branch (SDB), respectively. MSB is trained on binary labels to produce 3D neuron image segmentation maps. However, the segmentation results on challenging datasets often contain structural errors, such as discontinuous segments of weak-signal neuronal structures and missing filaments caused by a low signal-to-noise ratio (SNR). Therefore, SDB is introduced to detect the neuronal structures by regressing neuron distance transform maps. Furthermore, a Structure Attention Module (SAM) is designed to integrate the multi-scale feature maps of the two decoding paths and provide contextual guidance of structural features from SDB to MSB, improving the final segmentation performance. In the experiments, we evaluate our model on two challenging 3D neuron image datasets: the BigNeuron dataset and the Extended Whole Mouse Brain Sub-image (EWMBS) dataset. When different tracing methods are run on the segmented images produced by our method rather than by other state-of-the-art segmentation methods, the distance scores improve by 42.48% and 35.83% on the BigNeuron dataset and by 37.75% and 23.13% on the EWMBS dataset.
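The abstract states that SDB is trained by regressing neuron distance transform maps rather than binary labels. The paper defines its exact target; as a hedged illustration only, a common way to build such a regression target from a binary neuron mask is a Euclidean distance transform inside the foreground, clipped and normalized so values far from the boundary saturate (the `clip` parameter here is a hypothetical choice, not from the paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_target(mask, clip=5.0):
    """Sketch of a distance-transform regression target.

    For each foreground voxel, compute the Euclidean distance to the
    nearest background voxel, then clip and normalize to [0, 1].
    Background voxels stay at 0.
    """
    d = distance_transform_edt(mask.astype(bool))
    return np.clip(d, 0.0, clip) / clip

# Toy 3D volume with a short one-voxel-thick filament.
mask = np.zeros((7, 7, 7), dtype=np.uint8)
mask[3, 3, 1:6] = 1
target = distance_target(mask)
```

Such a soft target penalizes the network more for missing thick, central parts of a neurite than for boundary voxels, which matches the abstract's goal of recovering discontinuous weak-signal segments.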
Authors