3.8 Proceedings Paper

B-line Detection and Localization by Means of Deep Learning: Preliminary In-vitro Results

Journal

IMAGE ANALYSIS AND RECOGNITION, ICIAR 2019, PT I
Volume 11662, Pages 418-424

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-030-27202-9_38

Keywords

Lung ultrasound; B-lines; Image analysis; Deep learning

Abstract

Lung ultrasound imaging is receiving growing attention, as the analysis of specific artifact patterns reveals important diagnostic information. A- and B-line artifacts are particularly important: A-lines are generally considered a sign of a healthy lung, while B-lines correlate with a large variety of pathological conditions. B-lines have been found to indicate an increase in extravascular lung water, the presence of interstitial lung diseases, non-cardiogenic lung edema, interstitial pneumonia, and lung contusion. The capability to accurately and objectively detect and localize B-lines in a lung ultrasound video is therefore of great clinical interest. In this paper, we present a method aimed at supporting clinicians in the analysis of ultrasound videos by automatically detecting and localizing B-lines in real time. To this end, modern deep learning strategies were used: a fully convolutional neural network was trained to detect B-lines in B-mode images of dedicated ultrasound phantoms. Furthermore, neural attention maps were calculated to visualize which components in the image triggered the network, thereby offering simultaneous weakly supervised localization. Accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of 0.917, 0.915, 0.918, 0.950, and 0.864, respectively, were achieved in vitro on data from dedicated lung-mimicking phantoms.
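The pattern outlined in the abstract can be illustrated with a compact sketch. The code below is not the authors' implementation: the framework (PyTorch), the layer sizes, and the class name BLineFCN are assumptions made purely for illustration. It shows the general idea of a fully convolutional classifier for B-line presence whose final convolutional score maps are globally pooled for the image-level decision and can be upsampled into a class-activation-style attention map for weakly supervised localization.

```python
# Illustrative sketch only: architecture, framework, and names are assumptions,
# not the published method. It demonstrates a fully convolutional classifier
# whose class score maps double as a neural attention map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BLineFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extractor: stacked conv blocks on single-channel B-mode frames.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # 1x1 conv produces two spatial score maps (no B-line / B-line).
        self.classifier = nn.Conv2d(64, 2, kernel_size=1)

    def forward(self, x):
        score_maps = self.classifier(self.features(x))   # (N, 2, H', W')
        logits = score_maps.mean(dim=(2, 3))              # global average pooling
        return logits, score_maps

model = BLineFCN()
frame = torch.randn(1, 1, 128, 128)                       # dummy B-mode frame
logits, score_maps = model(frame)
# Upsampling the B-line score map to the input size yields an attention map
# that highlights the image regions responsible for the detection.
attention = F.interpolate(score_maps[:, 1:2], size=frame.shape[-2:],
                          mode="bilinear", align_corners=False)
```

In a design of this kind, training requires only frame-level labels (B-line present or absent), while the attention map provides localization as a by-product, which matches the weakly supervised setup described in the abstract.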
