Article

Predicting Classifier Performance with Limited Training Data: Applications to Computer-Aided Diagnosis in Breast and Prostate Cancer

Journal

PLOS ONE
Volume 10, Issue 5

Publisher

Public Library of Science
DOI: 10.1371/journal.pone.0117900

Funding

  1. National Cancer Institute of the National Institutes of Health [R01CA136535-01, R01CA140772-01, R21CA167811-01, R21CA179327-01, R21CA195152-01]
  2. National Institute of Diabetes and Digestive and Kidney Diseases [R01DK098503-02]
  3. DOD Prostate Cancer Synergistic Idea Development Award [PC120857]
  4. DOD Lung Cancer Idea Development New Investigator Award [LC130463]
  5. DOD Prostate Cancer Idea Development Award
  6. Ohio Third Frontier Technology Development Grant
  7. CTSC Coulter Annual Pilot Grant
  8. Case Comprehensive Cancer Center Pilot Grant
  9. VelaSano Grant Cleveland Clinic
  10. Wallace H. Coulter Foundation Program in the Department of Biomedical Engineering at Case Western Reserve University

Abstract

Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet a classifier selected at the start of the trial based on smaller, more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance on large datasets from error rates calculated on smaller datasets, and (2) selecting appropriate classifiers based on that expected performance. We present a framework for comparative evaluation of classifiers using only limited amounts of training data, combining random repeated sampling (RRS) with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on synthetic data as well as three computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high- and low-grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relative performance of three distinct classifiers (k-nearest neighbor, naive Bayes, support vector machine) is explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling, across all three datasets.
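The abstract's pipeline can be sketched as follows: estimate cross-validated error rates over repeated random subsamples at several small training-set sizes, then extrapolate to a larger dataset size. This is a minimal illustrative sketch only; the classifier choice, subsample sizes, repeat counts, and the inverse-power-law extrapolation form are assumptions for demonstration, not the authors' exact protocol.

```python
# Sketch: RRS with cross-validation sampling, then learning-curve extrapolation.
# All concrete choices here (kNN, sizes, inverse power law) are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for a large imaging dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

def rrs_cv_error(clf, X, y, n_train, n_repeats=20, n_folds=5):
    """Mean cross-validated error over repeated random subsamples of size n_train."""
    errs = []
    for _ in range(n_repeats):
        idx = rng.choice(len(y), size=n_train, replace=False)
        acc = cross_val_score(clf, X[idx], y[idx], cv=n_folds).mean()
        errs.append(1.0 - acc)
    return float(np.mean(errs))

# Error estimates at small, accessible training-set sizes.
sizes = np.array([50, 100, 200, 400])
errors = np.array([rrs_cv_error(KNeighborsClassifier(5), X, y, n) for n in sizes])

# Extrapolate with an inverse power law e(n) = a * n^(-alpha) + b,
# a common learning-curve model (an assumed form here).
def power_law(n, a, alpha, b):
    return a * n ** (-alpha) + b

params, _ = curve_fit(power_law, sizes, errors,
                      p0=[1.0, 0.5, 0.05],
                      bounds=([0.0, 0.0, 0.0], [np.inf, 2.0, 1.0]),
                      maxfev=10000)
predicted_err_2000 = power_law(2000, *params)
print(f"predicted error at n=2000: {predicted_err_2000:.3f}")
```

Averaging the cross-validated error over many random subsamples is what reduces the variability (IQR) of each point on the learning curve, which in turn stabilizes the extrapolated estimate.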
