Article

Automatic classification of dual-modality, smartphone-based oral dysplasia and malignancy images using deep learning

Journal

Biomedical Optics Express
Volume 9, Issue 11, Pages 5318-5329

Publisher

Optical Society of America
DOI: 10.1364/BOE.9.005318


Funding

  1. National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health [UH2EB022623]
  2. National Institutes of Health (NIH) [S10OD018061]
  3. National Institutes of Health Biomedical Imaging and Spectroscopy Training [T32EB000809]

Abstract

With the goal of screening high-risk populations for oral cancer in low- and middle-income countries (LMICs), we have developed a low-cost, portable, easy-to-use smartphone-based intraoral dual-modality imaging platform. In this paper we present an image classification approach based on autofluorescence and white light images using deep learning methods. The information from the autofluorescence and white light image pair is extracted, calculated, and fused to feed the deep learning neural networks. We have investigated and compared the performance of different convolutional neural networks, transfer learning, and several regularization techniques for oral cancer classification. Our experimental results demonstrate the effectiveness of deep learning methods in classifying dual-modal images for oral cancer detection. (C) 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement.
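The abstract describes fusing each autofluorescence/white light image pair into a single input for a convolutional network. The paper's actual fusion and network details are not given here; the sketch below is only a minimal illustration of one common fusion scheme, assuming the white light image is RGB and the autofluorescence image is single-channel, stacked into a 4-channel array (the function name and shapes are hypothetical).

```python
import numpy as np

def fuse_dual_modality(white_light, autofluorescence):
    """Stack a white light RGB image and an autofluorescence grayscale
    image into one 4-channel float array suitable as CNN input.

    white_light: (H, W, 3) uint8 array
    autofluorescence: (H, W) uint8 array
    Returns: (H, W, 4) float32 array scaled to [0, 1].
    """
    wl = white_light.astype(np.float32) / 255.0
    af = autofluorescence.astype(np.float32) / 255.0
    # Append the autofluorescence channel after the three RGB channels.
    return np.concatenate([wl, af[..., np.newaxis]], axis=-1)

# Example with a synthetic 224x224 image pair (typical CNN input size).
wl = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
af = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
fused = fuse_dual_modality(wl, af)
print(fused.shape)  # (224, 224, 4)
```

A 4-channel input like this requires the network's first convolutional layer to accept four channels, which is one reason transfer learning from RGB-pretrained models may need an adapted input layer.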
