Article

A VGG attention vision transformer network for benign and malignant classification of breast ultrasound images

Journal

MEDICAL PHYSICS
Volume 49, Issue 9, Pages 5787-5798

Publisher

WILEY
DOI: 10.1002/mp.15852

Keywords

breast tumor; breast ultrasound image; classification; deep learning

Funding

  1. National Natural Science Foundation of China [61901024]
  2. Beijing Municipal Natural Science Foundation [L192057]
  3. Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology


In this study, a visual geometry group attention ViT (VGGA-ViT) network was proposed for breast ultrasound (BUS) image classification. By combining a convolutional neural network (CNN) and a vision transformer (ViT), the network improved both classification accuracy and objectivity.
Purpose

Breast cancer is the most commonly occurring cancer worldwide. The ultrasound reflectivity imaging technique can be used to obtain breast ultrasound (BUS) images, which can be used to classify benign and malignant tumors. However, the classification is subjective and depends on the experience and skill of operators and doctors. Automatic classification can assist doctors and improve objectivity, but current convolutional neural networks (CNNs) are not good at learning global features, and vision transformers (ViTs) are not good at extracting local features. In this study, we proposed a visual geometry group attention ViT (VGGA-ViT) network to overcome these disadvantages.

Methods

In the proposed method, a CNN module extracts local features, and a ViT module learns the global relationships among different regions and enhances the relevant local features. The CNN module, named the VGGA module, is composed of a VGG backbone, a feature-extraction fully connected layer, and a squeeze-and-excitation block. Both the VGG backbone and the ViT module were pretrained on the ImageNet dataset and retrained on BUS samples in this study. Two BUS datasets were employed for validation.

Results

Cross-validation was conducted on two BUS datasets. On Dataset A, the proposed VGGA-ViT network achieved high accuracy (88.71 ± 1.55%), recall (90.73 ± 1.57%), specificity (85.58 ± 3.35%), precision (90.77 ± 1.98%), F1 score (90.73 ± 1.24%), and Matthews correlation coefficient (MCC) (76.34 ± 3.29%), all better than those of the previous networks compared in this study. Dataset B was used as a separate test set; on it, the VGGA-ViT had the highest accuracy (81.72 ± 2.99%), recall (64.45 ± 2.96%), specificity (90.28 ± 3.51%), precision (77.08 ± 7.21%), F1 score (70.11 ± 4.25%), and MCC (57.64 ± 6.88%).

Conclusions

In this study, we proposed the VGGA-ViT for BUS classification, which learns both local and global features well. The proposed network achieved higher accuracy than the compared previous methods.
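To make the described architecture concrete, the following is a minimal PyTorch sketch of a VGGA-ViT-style pipeline. It assumes a torchvision VGG-16 backbone, a standard squeeze-and-excitation block, and a generic transformer encoder standing in for the authors' ViT module; all layer sizes, token shapes, and the omission of the feature-extraction fully connected layer are simplifying assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a VGGA-ViT-style network (not the authors' exact model).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation weights
        return x * w                                # channel-wise reweighting

class VGGAViT(nn.Module):
    """CNN (VGG + SE) for local features, transformer encoder for global context."""
    def __init__(self, num_classes: int = 2, embed_dim: int = 512):
        super().__init__()
        self.backbone = vgg16(weights="IMAGENET1K_V1").features  # pretrained VGG
        self.se = SEBlock(512)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=8, batch_first=True)
        self.vit = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                      # x: (B, 3, 224, 224)
        f = self.se(self.backbone(x))          # (B, 512, 7, 7) local features
        tokens = f.flatten(2).transpose(1, 2)  # (B, 49, 512) spatial tokens
        g = self.vit(tokens).mean(dim=1)       # global relationships, pooled
        return self.head(g)                    # benign/malignant logits

model = VGGAViT()
logits = model(torch.randn(1, 3, 224, 224))    # -> shape (1, 2)
```

In this sketch, the feature map from the SE-weighted VGG backbone is flattened into spatial tokens so the transformer encoder can model relationships among image regions, mirroring the paper's division of labor between local (CNN) and global (ViT) feature learning.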
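The metrics reported in the Results can be computed as follows; this is a minimal sketch assuming scikit-learn, with y_true and y_pred as hypothetical placeholder label arrays rather than the paper's data.

```python
# Hedged sketch of the reported evaluation metrics (scikit-learn assumed).
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, matthews_corrcoef, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical ground truth: 0=benign, 1=malignant
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # hypothetical model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   :", accuracy_score(y_true, y_pred))
print("recall     :", recall_score(y_true, y_pred))   # sensitivity, TP/(TP+FN)
print("specificity:", tn / (tn + fp))                 # TN/(TN+FP)
print("precision  :", precision_score(y_true, y_pred))
print("F1 score   :", f1_score(y_true, y_pred))
print("MCC        :", matthews_corrcoef(y_true, y_pred))
```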

