Article

Chest X-Ray Outlier Detection Model Using Dimension Reduction and Edge Detection

Journal

IEEE Access
Volume 9, Pages 86096-86106

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2021.3086103

Keywords

X-ray imaging; Medical diagnostic imaging; Diseases; Artificial intelligence; Lung; Dimensionality reduction; Principal component analysis; Computer-aided diagnosis (CAD) system; feature extraction; line feature analysis (LFA); RNN; deep learning

Funding

  1. National Research Foundation of Korea (NRF) - Korea Government [2019R1F1A1060328]

Abstract

This paper proposes a chest X-ray outlier detection model using dimension reduction and edge detection to address the high costs incurred in the medical field due to the use of high-spec equipment and energy consumption. Experimental results based on the COVID-chest X-ray dataset show that LFA-RNN achieved the highest accuracy and lowest loss among the models evaluated.
With the advancement of artificial intelligence technology, various applied software is being developed and studies on detection, classification, and prediction are actively conducted through interdisciplinary convergence and integration. Among these, medical AI has drawn great interest in computer-aided diagnosis (CAD), which collects human body signals to predict abnormal health symptoms and diagnoses diseases through medical images such as X-ray and CT. Because medical X-ray and CT use high-resolution images, learning and recognition involve heavy computation, requiring high-specification equipment and large amounts of energy and thus incurring substantial costs to build an operating environment. This paper therefore proposes a chest X-ray outlier detection model using dimension reduction and edge detection to address these issues. The proposed method scans an X-ray image with a window of a fixed size, computes difference images of adjacent segment images, and extracts the edge information in binary form through an AND operation. To convert the extracted edges, which are visual information, into a series of lines, they are convolved with a detection filter whose coefficients are powers of two (2^n), dividing the lines into 16 types. Counting the converted data produces a one-dimensional array of size 16 per segment image, and this reduced data is used as the input to an RNN-based learning model. Various experiments were conducted on the COVID-chest X-ray dataset to evaluate the performance of the proposed model. According to the experimental results, LFA-RNN achieved the highest accuracy at 97.5%, compared with 96.6% for VGG, 96.1% for CRNN, 94.1% for AlexNet, 79.4% for Conv1D, and 78.9% for DNN. LFA-RNN also showed the lowest loss, at about 0.0357.
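The line feature analysis (LFA) step described above can be illustrated with a short sketch. The Python code below is only one plausible reading of the abstract, not the authors' implementation: the window size, the binarisation threshold, the exact form of the AND operation, and the 2x2 layout of the 2^n detection filter are assumptions made for illustration, and NumPy/SciPy are used purely for convenience.

```python
import numpy as np
from scipy.signal import convolve2d

WINDOW = 64          # assumed size of the scanning window (segment image)
EDGE_THRESHOLD = 30  # assumed threshold for binarising the difference image

# 2x2 detection filter with coefficients 2^n (n = 0..3); convolving a binary
# edge map with it encodes each local 2x2 pattern as a value 0..15,
# i.e. one of 16 possible line types.
DETECTION_FILTER = np.array([[1, 2],
                             [4, 8]])

def segment_images(xray, window=WINDOW):
    """Scan the X-ray with a non-overlapping window of a fixed size."""
    h, w = xray.shape
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            yield xray[y:y + window, x:x + window]

def lfa_features(xray):
    """Return one 16-element count vector per pair of adjacent segment images."""
    segments = list(segment_images(xray))
    features = []
    for prev, curr in zip(segments, segments[1:]):
        # Difference imaging of adjacent segment images, then binarise.
        diff = np.abs(curr.astype(int) - prev.astype(int)) > EDGE_THRESHOLD
        # Assumed AND step: keep edge pixels that are also bright in the
        # current segment, giving a binary edge map.
        edges = diff & (curr > curr.mean())
        # Convolution with the 2^n filter maps each 2x2 neighbourhood to 0..15.
        codes = convolve2d(edges.astype(int), DETECTION_FILTER, mode='valid')
        # Counting the 16 line types gives a one-dimensional array of size 16.
        features.append(np.bincount(codes.ravel(), minlength=16))
    return np.stack(features)  # shape: (num_segment pairs, 16)
```

The resulting sequence of 16-element count vectors, one per pair of adjacent segment images, is the reduced representation that would replace raw pixels as input to the recurrent classifier; the abstract does not specify the recurrent architecture, so any standard RNN or LSTM layer with a binary output could stand in for LFA-RNN in an experiment of this kind.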
