Article

Is the aspect ratio of cells important in deep learning? A robust comparison of deep learning methods for multi-scale cytopathology cell image classification: From convolutional neural networks to visual transformers

Journal

COMPUTERS IN BIOLOGY AND MEDICINE
Volume 141

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compbiomed.2021.105026

Keywords

Cervical cancer; Deep learning; Pap smear; Aspect ratio of cells; Visual transformer; Robustness comparison

Funding

  1. National Natural Science Foundation of China [61806047]
  2. Fundamental Research Funds for the Central Universities [N2019003]

Abstract

Cervical cancer is a common and fatal cancer in women, and cytopathology images are often used to screen for it. Because manual screening is error-prone, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require input images of a fixed dimension, but the dimensions of clinical medical images are inconsistent, so resizing them directly distorts their aspect ratios. Clinically, the aspect ratios of cells in cytopathological images carry important diagnostic information, which makes direct resizing problematic. Nevertheless, many existing studies have resized images directly and still obtained highly robust classification results. To find a reasonable explanation, we conducted a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are pre-processed to obtain standard and scaled datasets. Then, both datasets are resized to 224 x 224 pixels. Finally, 22 deep learning models are used to classify the standard and scaled datasets. The results indicate that deep learning models are robust to changes in the aspect ratio of cells in cervical cytopathological images. This conclusion is further validated on the Herlev dataset.
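To make the resizing question concrete, the sketch below contrasts the two pre-processing choices the abstract discusses: direct resizing to 224 x 224 (which distorts cell aspect ratios) versus aspect-preserving "letterbox" resizing with padding. This is an illustrative helper under assumed conventions, not the authors' actual pre-processing code; the function name `letterbox_dims` is hypothetical.

```python
# Hypothetical helper: compute the aspect-preserving (letterbox) target
# size and padding offsets for fitting an image into a fixed square
# network input, instead of stretching it directly to target x target.
def letterbox_dims(width, height, target=224):
    """Return (new_w, new_h, pad_left, pad_top) that fit a width x height
    image inside a target x target square while preserving aspect ratio."""
    scale = target / max(width, height)          # shrink the longer side to target
    new_w, new_h = round(width * scale), round(height * scale)
    pad_left = (target - new_w) // 2             # center horizontally
    pad_top = (target - new_h) // 2              # center vertically
    return new_w, new_h, pad_left, pad_top

# Example: a 448 x 112 cell crop. Direct resizing to 224 x 224 would
# quadruple the cell's relative height; letterboxing keeps the 4:1
# ratio (224 x 56) and centers it with 84 pixels of padding top/bottom.
print(letterbox_dims(448, 112))  # -> (224, 56, 0, 84)
```

The paper's finding suggests that, for these datasets and 22 tested models, classification accuracy is largely insensitive to which of the two strategies is used.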
