Article

Classification of rotator cuff tears in ultrasound images using deep learning models

Journal

MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING
Volume 60, Issue 5, Pages 1269-1278

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s11517-022-02502-6

Keywords

Rotator cuff tears; Deep learning; Convolutional neural network; Ultrasound; Transfer learning

Funding

  1. Korea Ministry of Environment (MOE) [2018001360004]
  2. National Research Foundation of Korea (NRF) - Korea government (MSIT) [NRF-2018R1D1A1B07040886, NRF-2021R1F1A1060436]


This study developed an automated classification method for rotator cuff tears and provided visualization of tear location using deep learning algorithms and ultrasound images. Among the five pre-trained models, DenseNet121 demonstrated the best classification performance, confirming the feasibility of using deep learning and ultrasound images to assist in diagnosing rotator cuff tears.
Rotator cuff tears (RCTs) are among the most common shoulder injuries and are typically diagnosed using relatively expensive and time-consuming imaging tests such as magnetic resonance imaging or computed tomography. Deep learning algorithms are increasingly used to analyze medical images, but they have not previously been applied to identifying RCTs in ultrasound images. The aim of this study is to develop an approach that automatically classifies RCTs and visualizes tear location using ultrasound images and convolutional neural networks (CNNs). The proposed method was developed using transfer learning and fine-tuning with five pre-trained deep models (VGG19, InceptionV3, Xception, ResNet50, and DenseNet121). Bayesian optimization was used to tune the hyperparameters of the CNN models. A total of 194 ultrasound images from Kosin University Gospel Hospital were used to train and test the CNN models with five-fold cross-validation. Among the five models, DenseNet121 achieved the best classification performance, with 88.2% accuracy, 93.8% sensitivity, 83.6% specificity, and an AUC of 0.832. Gradient-weighted class activation mapping (Grad-CAM) highlighted the image regions most influential in the models' predictions. The proposed approach demonstrates the feasibility of using deep learning and ultrasound images to assist in the diagnosis of RCTs.
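The abstract reports three binary-classification metrics (accuracy, sensitivity, specificity). As a minimal illustration of how those metrics are derived from a confusion matrix (tear vs. no tear), the following pure-Python sketch computes them; the counts used here are hypothetical and are not the paper's actual confusion matrix.

```python
def classification_metrics(tp, fn, tn, fp):
    """Compute accuracy, sensitivity, and specificity from
    binary confusion-matrix counts (tear = positive class)."""
    total = tp + fn + tn + fp
    accuracy = (tp + tn) / total      # fraction of all predictions that are correct
    sensitivity = tp / (tp + fn)      # true-positive rate: tears correctly detected
    specificity = tn / (tn + fp)      # true-negative rate: intact cuffs correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts chosen only to illustrate the formulas.
acc, sens, spec = classification_metrics(tp=15, fn=1, tn=15, fp=3)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```

Note that sensitivity and specificity trade off against each other as the decision threshold moves; the AUC reported in the abstract summarizes that trade-off across all thresholds.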
