Article

Deep learning for liver tumor diagnosis part II: convolutional neural network interpretation using radiologic imaging features

Journal

EUROPEAN RADIOLOGY
Volume 29, Issue 7, Pages 3348-3357

Publisher

SPRINGER
DOI: 10.1007/s00330-019-06214-8

Keywords

Liver cancer; Artificial intelligence; Deep learning

Funding

  1. Radiological Society of North America (RSNA Research Resident Grant) [RR1731]
  2. National Institutes of Health [NIH/NCI R01 CA206180]

Abstract

Objectives
To develop a proof-of-concept interpretable deep learning prototype that justifies aspects of its predictions from a pre-trained hepatic lesion classifier.

Methods
A convolutional neural network (CNN) was engineered and trained to classify six hepatic tumor entities using 494 lesions on multi-phasic MRI, as described in Part 1. A subset of each lesion class was labeled with up to four key imaging features per lesion. A post hoc algorithm inferred the presence of these features in a test set of 60 lesions by analyzing activation patterns of the pre-trained CNN model. Feature maps were generated that highlight the regions in the original image corresponding to each feature. Additionally, a relevance score was assigned to each identified feature, denoting its relative contribution to the predicted lesion classification.

Results
The interpretable deep learning system achieved a 76.5% positive predictive value and 82.9% sensitivity in identifying the correct radiological features present in each test lesion. The model misclassified 12% of lesions. Incorrect features were identified more often in misclassified lesions than in correctly classified lesions (60.4% vs. 85.6% of features correct). Feature maps were consistent with the original image voxels contributing to each imaging feature. Feature relevance scores tended to reflect the most prominent imaging criteria for each class.

Conclusions
This interpretable deep learning system demonstrates proof of principle for illuminating portions of a pre-trained deep neural network's decision-making by analyzing inner layers and automatically describing the features contributing to its predictions.

Key Points
• An interpretable deep learning system prototype can explain aspects of its decision-making by identifying relevant imaging features and showing where these features are found on an image, facilitating clinical translation.
• By providing feedback on the importance of various radiological features in performing differential diagnosis, interpretable deep learning systems have the potential to interface with standardized reporting systems such as LI-RADS, validating ancillary features and improving clinical practicality.
• An interpretable deep learning system could potentially add quantitative data to radiologic reports and provide radiologists with evidence-based decision support.
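The abstract describes a post hoc algorithm that infers the presence of radiological features from the pre-trained CNN's inner-layer activations and assigns each detected feature a relevance score toward the predicted class. The paper does not publish code, so the following is only a minimal PyTorch sketch of one way such a probe could work: every name here (TinyLesionCNN, FEATURE_NAMES, the linear probe, and the cosine-alignment relevance heuristic) is an illustrative assumption, not the authors' algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# All names below are illustrative assumptions; the paper publishes no code.
FEATURE_NAMES = ["arterial phase hyperenhancement", "washout",
                 "enhancing capsule", "T2 hyperintensity"]
N_CLASSES = 6  # six hepatic tumor entities, as in Part 1


class TinyLesionCNN(nn.Module):
    """Stand-in for the pre-trained lesion classifier described in Part 1."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.class_head = nn.Linear(64, N_CLASSES)

    def forward(self, x):
        acts = self.features(x)          # inner-layer activations (B, 64, h, w)
        pooled = acts.mean(dim=(2, 3))   # global average pooling
        return self.class_head(pooled), acts


# Linear probe, trained separately on the feature-labeled subset, mapping
# pooled activations to the presence of each radiological feature.
probe = nn.Linear(64, len(FEATURE_NAMES))


def explain(model, probe, image, threshold=0.5):
    """Predict the lesion class of one (1, 3, H, W) image and list the
    detected features with a normalized relevance score for each."""
    with torch.no_grad():
        logits, acts = model(image)
        pooled = acts.mean(dim=(2, 3))
        presence = torch.sigmoid(probe(pooled))[0]   # per-feature probability
        pred = int(logits.argmax(dim=1))
        # Relevance heuristic (an assumption, not the paper's method):
        # presence weighted by how well the feature's probe direction aligns
        # with the predicted class's weight vector in channel space.
        class_dir = model.class_head.weight[pred].unsqueeze(0)   # (1, 64)
        align = F.cosine_similarity(probe.weight, class_dir, dim=1).clamp(min=0)
        relevance = presence * align
        relevance = relevance / relevance.sum().clamp(min=1e-8)
    return pred, [(name, float(relevance[i]))
                  for i, name in enumerate(FEATURE_NAMES)
                  if presence[i] > threshold]


# Usage with a random placeholder standing in for a multi-phasic MRI crop:
model = TinyLesionCNN()
pred_class, detected = explain(model, probe, torch.randn(1, 3, 64, 64))
print(pred_class, detected)
```

Untrained, the sketch returns arbitrary scores; the point is the structure: feature presence is read out of pooled inner activations, and relevance is computed relative to the winning class rather than in isolation.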
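The feature maps described in the Methods highlight which image regions correspond to a detected feature. A common way to produce such maps from inner activations is a CAM-style weighted sum of channels; the sketch below, reusing the hypothetical model and probe from the sketch above, illustrates this idea. Again, this is one standard technique under stated assumptions, not necessarily the paper's exact procedure.

```python
def feature_map(model, probe, image, feature_idx):
    """CAM-style map for one feature: the inner activation channels are
    weighted by that feature's probe weights, summed, upsampled to the
    input resolution, and normalized to [0, 1] for overlay."""
    with torch.no_grad():
        _, acts = model(image)                        # (B, 64, h, w)
        w = probe.weight[feature_idx]                 # (64,)
        cam = F.relu(torch.einsum("c,bchw->bhw", w, acts))
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                            mode="bilinear", align_corners=False)
        cam = cam / cam.amax(dim=(2, 3), keepdim=True).clamp(min=1e-8)
    return cam.squeeze(1)                             # (B, H, W)


# Overlay-ready heatmap for the first feature on the same placeholder image:
heatmap = feature_map(model, probe, torch.randn(1, 3, 64, 64), feature_idx=0)
print(heatmap.shape)   # torch.Size([1, 64, 64])
```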
