4.3 Article

Fully automatic tumor segmentation of breast ultrasound images with deep learning

Journal

Journal of Applied Clinical Medical Physics

Publisher

WILEY
DOI: 10.1002/acm2.13863

Keywords

automatic segmentation; breast cancer; breast ultrasound; deep learning

This study proposed a novel model for breast ultrasound screening that combines a classification branch and a segmentation branch, achieving better segmentation performance than state-of-the-art models and good transferability to an external test set. The model was trained and tested on a large dataset and demonstrated potential for use in fully automatic breast ultrasound health screening.
Background

Breast ultrasound (BUS) imaging is one of the most prevalent approaches for the detection of breast cancers. Tumor segmentation of BUS images can help doctors localize tumors and is a necessary step for computer-aided diagnosis systems. While the majority of clinical BUS scans are normal ones without tumors, segmentation approaches such as U-Net often predict mass regions for these images. This false-positive problem becomes serious if a fully automatic artificial intelligence system is used for routine screening.

Methods

In this study, we proposed a novel model that is better suited to routine BUS screening. The model contains a classification branch that determines whether an image is normal or contains tumors, and a segmentation branch that outlines tumors; the two branches share the same encoder network. We also built a new dataset of 1600 BUS images from 625 patients for training and a test set of 130 images from 120 patients. The dataset is the largest one with pixel-wise masks manually segmented by experienced radiologists. Our code is available at .

Results

The area under the receiver operating characteristic curve (AUC) for classifying images into normal/abnormal categories was 0.991. The Dice similarity coefficient (DSC) for segmentation of mass regions was 0.898, better than state-of-the-art models. Testing on an external dataset gave similar performance, demonstrating good transferability of our model. Moreover, we simulated the use of the model in actual clinical practice by processing videos recorded during BUS scans; the model gave very few false-positive predictions on normal images without sacrificing sensitivity for images with tumors.

Conclusions

Our model achieved better segmentation performance than state-of-the-art models and showed good transferability on an external test set. The proposed deep learning architecture holds potential for use in fully automatic BUS health screening.
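The abstract describes only the high-level architecture: a shared encoder feeding a segmentation branch and a classification branch. The PyTorch sketch below is a minimal illustration of that idea under the assumption of a small U-Net-style backbone; the class name DualBranchBUSNet, layer widths, and helper conv_block are hypothetical and are not taken from the authors' released code.

```python
# Minimal sketch (not the authors' implementation): a shared encoder with a
# segmentation decoder and a classification head. All layer sizes are illustrative.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class DualBranchBUSNet(nn.Module):
    """Shared encoder feeding (a) a pixel-wise tumor segmentation decoder and
    (b) a normal/abnormal classification head on the encoder bottleneck."""

    def __init__(self, in_ch=1, base=32):
        super().__init__()
        # Shared encoder
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        # Segmentation branch (decoder with skip connections)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.seg_out = nn.Conv2d(base, 1, 1)
        # Classification branch
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 4, 1)
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_out(d1), self.cls_head(e3)  # (segmentation logits, class logit)


# Example: one 256x256 grayscale BUS image
model = DualBranchBUSNet()
seg, cls = model(torch.randn(1, 1, 256, 256))
print(seg.shape, cls.shape)  # torch.Size([1, 1, 256, 256]) torch.Size([1, 1])
```

In such a design, the classification logit can gate the segmentation output at inference time, which is one plausible way to suppress false-positive masses on normal scans; the paper's exact mechanism may differ.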
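The reported metrics are an image-level AUC for normal/abnormal classification and a pixel-level DSC for mass segmentation. The snippet below shows how these two quantities are conventionally computed from binary masks and classifier scores; the toy arrays and the helper name dice_coefficient are illustrative, not the paper's evaluation code.

```python
# Illustrative only: Dice similarity coefficient (DSC) between a predicted binary
# mask and the radiologist's mask, and image-level AUC for the classifier.
import numpy as np
from sklearn.metrics import roc_auc_score


def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2 * |P ∩ G| / (|P| + |G|) for binary masks P (prediction) and G (ground truth)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)


# Toy example: 4-pixel prediction overlapping a 6-pixel ground-truth region
pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1:3] = 1
gt = np.zeros((4, 4), dtype=np.uint8);   gt[1:3, 1:4] = 1
print(f"DSC = {dice_coefficient(pred, gt):.3f}")  # 2*4 / (4+6) = 0.800

# Image-level AUC: labels are 1 for images with tumors, 0 for normal scans;
# scores are the classification branch's sigmoid outputs.
labels = np.array([0, 0, 1, 1, 1])
scores = np.array([0.05, 0.20, 0.70, 0.90, 0.60])
print(f"AUC = {roc_auc_score(labels, scores):.3f}")
```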
