Article

Adversarial Attacks on Medical Image Classification

Journal

CANCERS
Volume 15, Issue 17, Pages: -

Publisher

MDPI
DOI: 10.3390/cancers15174228

Keywords

machine learning; artificial intelligence; adversarial learning; computer vision; metaheuristic


Simple Summary

As we increasingly rely on advanced imaging for medical diagnosis, it is vital that our computer programs can accurately interpret these images. Even a single mistaken pixel can lead to wrong predictions, potentially causing incorrect medical decisions. This study looks into how these tiny mistakes can trick our advanced algorithms. By changing just one or a few pixels in medical images, we tested how various computer models handled these changes. The findings showed that even small disruptions made it hard for the models to correctly interpret the images. This raises concerns about how reliable our current computer-aided diagnostic tools are and underscores the need for models that can resist such small disturbances.

Abstract

Due to the growing number of medical images produced by diverse radiological imaging techniques, radiography examinations with computer-aided diagnoses could greatly assist clinical applications. However, even a one-pixel inaccuracy in an image can lead to the incorrect prediction of a medical image, and such a misclassification may lead to the wrong clinical decision. This scenario is analogous to an adversarial attack on a deep learning model. Therefore, this study investigates one-pixel and multi-pixel attacks on Deep Neural Network (DNN) models trained on various medical image datasets. Common multiclass and multi-label datasets are examined for one-pixel attacks. Moreover, different experiments are conducted to determine how changing the number of perturbed pixels affects the classification performance and robustness of diverse DNN models. The experimental results show that the medical images rarely survived the pixel attacks, raising concerns about the accuracy of medical image classification and underscoring the importance of a model's ability to resist these attacks in computer-aided diagnosis.
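To make the attack setting concrete, the sketch below illustrates the general idea of a one-pixel attack: search for a single-pixel change that flips a classifier's prediction. This is a hypothetical minimal example, not the paper's method: it uses a toy linear "classifier" over 8x8 images and plain random search, whereas the study attacks trained DNNs and one-pixel attacks in the literature typically use a metaheuristic such as differential evolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "classifier": a fixed random linear model over flattened
# 8x8 grayscale images with 2 output classes. (Hypothetical; the paper
# attacks deep neural networks, not linear models.)
W = rng.normal(size=(64, 2))

def predict_logits(img):
    """Return the 2 class logits for an 8x8 image."""
    return img.ravel() @ W

def one_pixel_attack(img, true_label, n_trials=500):
    """Randomly search for a single-pixel change that flips the prediction.
    A crude stand-in for the differential-evolution search usually used
    in one-pixel attacks. Returns (adversarial image, pixel position),
    or (None, None) if no flip was found within the trial budget."""
    h, w = img.shape
    for _ in range(n_trials):
        y, x = rng.integers(h), rng.integers(w)   # pick a pixel
        adv = img.copy()
        adv[y, x] = rng.uniform(0.0, 1.0)         # try a new intensity
        if predict_logits(adv).argmax() != true_label:
            return adv, (y, x)
    return None, None

img = rng.uniform(size=(8, 8))          # stand-in "medical image"
label = predict_logits(img).argmax()    # the clean prediction
adv, pos = one_pixel_attack(img, label)
if adv is not None:
    print(f"prediction flipped by changing the pixel at {pos}")
```

The key point the example carries over to the paper's setting is the tiny perturbation budget: the adversarial image differs from the original in exactly one pixel, yet the predicted class changes.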

