Article

A deep learning method for classifying mammographic breast density categories

Journal

MEDICAL PHYSICS
Volume 45, Issue 1, Pages 314-321

Publisher

WILEY
DOI: 10.1002/mp.12683

Keywords

BI-RADS; breast density; convolutional neural network (CNN); deep learning; digital mammography; transfer learning

Funding

  1. National Institutes of Health (NIH)/National Cancer Institute (NCI) R01 grant [1R01CA193603]
  2. Radiological Society of North America (RSNA) Research Scholar Grant [RSCH1530]
  3. University of Pittsburgh Cancer Institute Precision Medicine Pilot Award from The Pittsburgh Foundation [MR2014-77613]
  4. NIH grant from the National Center for Advancing Translational Sciences (NCATS) [5UL1TR001857]
  5. NVIDIA Corporation

Purpose: Mammographic breast density is an established risk marker for breast cancer and is visually assessed by radiologists in routine mammogram image reading, using four qualitative Breast Imaging Reporting and Data System (BI-RADS) breast density categories. It is particularly difficult for radiologists to consistently distinguish the two most common and most variably assigned BI-RADS categories, i.e., scattered density and heterogeneously dense. The aim of this work was to investigate a deep learning-based breast density classifier to consistently distinguish these two categories, with the goal of providing a potential computerized tool to assist radiologists in assigning a BI-RADS category in the current clinical workflow.

Methods: In this study, we constructed a convolutional neural network (CNN)-based model coupled with a large (i.e., 22,000 images) digital mammogram imaging dataset to evaluate the classification performance between the two aforementioned breast density categories. All images were collected from a cohort of 1,427 women who underwent standard digital mammography screening from 2005 to 2016 at our institution. The ground truth density categories were based on standard clinical assessments made by board-certified breast imaging radiologists. For the specific task of breast density classification, we evaluated both direct training from scratch solely on digital mammogram images and transfer learning from a model pretrained on a large nonmedical imaging dataset. To measure classification performance, the CNN classifier was also tested on a refined version of the mammogram image dataset in which potentially inaccurately labeled images were removed. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to measure the accuracy of the classifier.

Results: The AUC was 0.9421 when the CNN model was trained from scratch on our own mammogram images, and the accuracy increased gradually with the size of the training set. Using the pretrained model followed by fine-tuning with as few as 500 mammogram images led to an AUC of 0.9265. After removing the potentially inaccurately labeled images, the AUC increased to 0.9882 and 0.9857 without and with the pretrained model, respectively, both significantly higher (P < 0.001) than when using the full imaging dataset.

Conclusions: Our study demonstrated high classification accuracy between two difficult-to-distinguish breast density categories that are routinely assessed by radiologists. We anticipate that our approach will help enhance current clinical assessment of breast density and better support consistent density notification to patients in breast cancer screening. (C) 2017 American Association of Physicists in Medicine
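The reported AUC values summarize how well the classifier's scores rank heterogeneously dense images above scattered-density ones. As a minimal illustration (not the authors' code), ROC AUC can be computed from per-image scores with the Mann-Whitney rank formulation; the labels and scores below are invented toy data:

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count 0.5)."""
    positives = [s for y, s in zip(labels, scores) if y == 1]
    negatives = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Toy example: 1 = heterogeneously dense, 0 = scattered density
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.55, 0.3, 0.7, 0.6, 0.5]
print(roc_auc(labels, scores))  # 7 of 9 positive/negative pairs ranked correctly
```

This quadratic pairwise form is fine for small test sets; production code would typically use a sorted-rank implementation such as `sklearn.metrics.roc_auc_score`.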
