Article

Target attack on biomedical image segmentation model based on multi-scale gradients

Journal

INFORMATION SCIENCES
Volume 554, Pages 33-46

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2020.12.013

Keywords

Adversarial example; Target attack; Multi-scale gradients; Deep learning security; Biomedical image segmentation

Funding

  1. National Natural Science Foundation of China [61673396, U19A2073, 61976245]


This paper investigates the impact of adversarial examples on biomedical image segmentation models and proposes a multi-scale gradient-based attack method. Experimental results demonstrate that the predicted masks generated by this method have high intersection over union and pixel accuracy with respect to the target masks, while the distances between adversarial and clean examples are reduced compared with the state-of-the-art method.
Research shows that deep neural networks (DNNs) are vulnerable to adversarial examples due to their highly linear nature, so adversarial examples pose a security threat to deep learning. However, there is a lack of research on the impact of adversarial examples on biomedical segmentation models. Since many medical imaging problems are segmentation problems, this paper analyzes the impact of adversarial examples on deep-learning-based image segmentation models. We propose to fool biomedical segmentation models into producing target segmentation masks using feature-space perturbations and a cross-entropy loss function. Unlike traditional gradient-based attacks, which usually use only the gradient of the final loss function, this paper adopts a Multi-scale Attack (MSA) method based on multi-scale gradients. Extensive experiments attacking U-Net on the ISIC skin lesion segmentation challenge dataset and a glaucoma optic disc segmentation dataset show that the predicted masks generated by this method have high intersection over union (IoU) and pixel accuracy with the target masks. In addition, the L2 and L-infinity distances between the adversarial and clean examples are reduced compared with the state-of-the-art method. (C) 2020 Elsevier Inc. All rights reserved.
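To make the idea of a targeted, multi-scale-gradient attack concrete, the following is a minimal PyTorch sketch. It is not the authors' implementation: the function name, hyper-parameters (eps, alpha, steps, scales), and the choice to aggregate cross-entropy losses over resized copies of the final logits (rather than tapping intermediate feature maps, as the paper's feature-space perturbation does) are all illustrative assumptions.

```python
# Hypothetical sketch of an iterative targeted segmentation attack that
# aggregates gradients from cross-entropy losses computed at several scales.
# Model, scales, and step sizes below are illustrative assumptions only.
import torch
import torch.nn.functional as F

def multi_scale_targeted_attack(model, image, target_mask,
                                scales=(1.0, 0.5, 0.25),
                                eps=8 / 255, alpha=1 / 255, steps=40):
    """Perturb `image` so that `model` predicts `target_mask`.

    model       : segmentation network returning per-pixel logits (N, C, H, W)
    image       : clean input of shape (N, 3, H, W), values in [0, 1]
    target_mask : desired labels of shape (N, H, W), dtype long
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)

        # Sum the cross-entropy toward the target mask over several output
        # scales, so the update direction mixes coarse and fine gradients.
        loss = 0.0
        for s in scales:
            if s == 1.0:
                lg, tg = logits, target_mask
            else:
                lg = F.interpolate(logits, scale_factor=s,
                                   mode="bilinear", align_corners=False)
                tg = F.interpolate(target_mask.unsqueeze(1).float(),
                                   scale_factor=s, mode="nearest")
                tg = tg.squeeze(1).long()
            loss = loss + F.cross_entropy(lg, tg)

        grad = torch.autograd.grad(loss, adv)[0]

        # Targeted attack: step *against* the gradient to pull the predicted
        # mask toward the target mask, then project back into the eps-ball.
        adv = adv.detach() - alpha * grad.sign()
        adv = image + torch.clamp(adv - image, -eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```

Summing losses over multiple resolutions is one simple way to obtain multi-scale gradients; success would be measured, as in the paper, by the IoU and pixel accuracy between the predicted and target masks and by the L2 and L-infinity distances between adversarial and clean inputs.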
