4.7 Article

Generalizing universal adversarial perturbations for deep neural networks

Related References

Note: only a selection of the references is listed here; download the original article for the complete bibliography.
Article Computer Science, Artificial Intelligence

Quantifying safety risks of deep neural networks

Peipei Xu et al.

Summary: This paper addresses the safety concerns raised by deploying deep neural networks in critical sectors and proposes a generic method for quantifying safety risks. By computing the maximum safe radius with respect to various safety risks, the safety of a network can be evaluated efficiently. The experimental results show that the proposed method achieves competitive performance in safety quantification. A toy sketch of the radius search follows the citation below.

COMPLEX & INTELLIGENT SYSTEMS (2023)
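
As a rough illustration of the maximum-safe-radius idea (not the paper's method, which relies on formal analysis), the sketch below binary-searches for the largest L-infinity budget at which a PGD attack fails. `model`, `x` (a single batched input), and `label` are assumed PyTorch objects; because PGD is an incomplete falsifier, the result is only an empirical upper bound on the true maximum safe radius.

```python
import torch

def pgd_finds_adversary(model, x, label, eps, steps=40):
    """Return True if PGD finds a misclassifying perturbation within eps.
    PGD stands in here as an (unsound) falsifier for illustration only."""
    if eps == 0:
        return False
    delta = torch.zeros_like(x, requires_grad=True)
    step = 2.5 * eps / steps
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(model(x + delta), label)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    with torch.no_grad():
        return (model(x + delta).argmax(1) != label).any().item()

def max_safe_radius(model, x, label, hi=0.5, iters=10):
    """Binary search for the largest eps at which the attack fails.
    Upper bound only: PGD failing does not prove absence of adversaries."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if pgd_finds_adversary(model, x, label, mid):
            hi = mid   # adversary found: shrink the radius
        else:
            lo = mid   # none found: radius tentatively safe
    return lo
```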

Proceedings Paper Computer Science, Artificial Intelligence

PRoA: A Probabilistic Robustness Assessment Against Functional Perturbations

Tianle Zhang et al.

Summary: Robustness measurement matters for safety-critical deep learning applications, but existing methods are often impractical: some demand the overly strict guarantee that no perturbation at all can fool the network, while others cover only specific perturbation types. The authors therefore propose PRoA, a novel and general probabilistic robustness assessment method based on adaptive concentration that handles a broad range of functional perturbations. PRoA provides statistical guarantees on the probabilistic robustness of deployed deep learning models, and the experiments show its effectiveness and scalability against existing baselines. The tool is available on GitHub for reproducibility: https://github.com/TrustAI/PRoA. A simplified sampling sketch follows the citation below.

MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT III (2023)
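
A minimal sketch of the underlying idea, assuming hypothetical `model` and `perturb` callables: sample random functional perturbations and bound the misclassification probability with a fixed-sample Hoeffding interval. PRoA itself uses adaptive concentration inequalities with early stopping, which this sketch omits.

```python
import math
import random

def robustness_estimate(model, x, label, perturb, n=10_000, delta=0.01):
    """Monte-Carlo estimate of P(perturbed x is misclassified) with a
    two-sided Hoeffding half-width valid at confidence 1 - delta.
    `model` and `perturb` are hypothetical stand-ins, with
    perturb(x, t) applying a functional perturbation of strength t."""
    failures = sum(
        model(perturb(x, random.uniform(0, 1))) != label for _ in range(n)
    )
    p_hat = failures / n
    half_width = math.sqrt(math.log(2 / delta) / (2 * n))
    return p_hat, half_width
```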

Article Computer Science, Artificial Intelligence

DIMBA: discretely masked black-box attack in single object tracking

Xiangyu Yin et al.

Summary: This paper proposes a novel adversarial attack that generates noise for single-object tracking in black-box settings. Using reinforcement learning, the method precisely localizes the important frame patches while reducing unnecessary query overhead, achieving adversarial performance that is competitive with or better than existing techniques. Extensive experiments on multiple datasets demonstrate its effectiveness. A simplified random-search variant is sketched after the citation below.

MACHINE LEARNING (2022)
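
A hedged sketch of a discretely masked black-box patch attack, with plain random search standing in for DIMBA's reinforcement-learning patch localization. `score_fn` is a hypothetical query-only oracle returning the tracker's confidence on a frame.

```python
import numpy as np

def patch_attack(score_fn, frame, patch=16, queries=200, eps=0.1):
    """Greedy random search over patch locations. score_fn is a
    hypothetical black-box tracker-confidence oracle; a lower score
    means a stronger attack. frame is an HxWxC array in [0, 1]."""
    h, w, c = frame.shape
    best, best_score = frame.copy(), score_fn(frame)
    for _ in range(queries):
        y = np.random.randint(0, h - patch)
        x = np.random.randint(0, w - patch)
        cand = best.copy()
        cand[y:y+patch, x:x+patch] += np.random.uniform(
            -eps, eps, (patch, patch, c))
        cand = np.clip(cand, 0.0, 1.0)
        score = score_fn(cand)
        if score < best_score:   # keep the patch only if it helps
            best, best_score = cand, score
    return best
```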

Article Computer Science, Artificial Intelligence

3DVerifier: efficient robustness verification for 3D point cloud models

Ronghui Mu et al.

Summary: 3DVerifier is an efficient and general framework for verifying the robustness of 3D point cloud models. It addresses the nonlinearity introduced by multiplication layers and the high computational complexity of verification, achieving significant gains in efficiency and accuracy over existing algorithms. An interval-arithmetic toy example follows the citation below.

MACHINE LEARNING (2022)
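
The multiplication layers mentioned above are bilinear, so standard linear bound propagation does not apply to them directly. A naive (and far looser than 3DVerifier's) way to bound such a layer is plain interval arithmetic, sketched below: for inputs confined to boxes, the extrema of an elementwise product lie at the corner combinations.

```python
import numpy as np

def mul_interval(lo_a, hi_a, lo_b, hi_b):
    """Sound elementwise bounds on a*b given a in [lo_a, hi_a] and
    b in [lo_b, hi_b]: check all four corner products."""
    corners = np.stack([lo_a * lo_b, lo_a * hi_b, hi_a * lo_b, hi_a * hi_b])
    return corners.min(axis=0), corners.max(axis=0)

# Example: a in [-1, 2], b in [0.5, 3] gives a*b in [-3, 6].
lo, hi = mul_interval(np.array([-1.0]), np.array([2.0]),
                      np.array([0.5]), np.array([3.0]))
```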

Proceedings Paper Computer Science, Artificial Intelligence

Data-free Universal Adversarial Perturbation and Black-box Attack

Chaoning Zhang et al.

Summary: This work offers an alternative explanation for the phenomenon of untargeted universal adversarial perturbations (UAPs), aiming to reduce dependence on the original training samples and to explore the potential for data-free black-box attacks. It proposes using artificial jigsaw images as training samples, achieving competitive performance in crafting universal adversarial perturbations. A minimal data-free sketch follows the citation below.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)
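
A minimal sketch of the data-free idea, assuming a PyTorch `model` that accepts 224x224 inputs: generate random tiled "jigsaw" images and optimize a single perturbation that maximally shifts the model's outputs on them. The loss below is a simplification for illustration, not the paper's exact objective.

```python
import torch

def jigsaw_batch(n=16, size=224, tile=32):
    """Random tile-patterned images standing in for real training data."""
    grid = size // tile
    tiles = torch.rand(n, 3, grid, grid)
    return tiles.repeat_interleave(tile, dim=2).repeat_interleave(tile, dim=3)

def train_uap(model, eps=10 / 255, steps=100, lr=0.01):
    """Optimize one perturbation that shifts the (assumed) model's
    outputs on jigsaw images, clamped to the L-inf budget eps."""
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x = jigsaw_batch()
        # push perturbed logits away from clean logits
        loss = -torch.norm(model(x + delta) - model(x).detach())
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return delta.detach()
```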

Article Computer Science, Theory & Methods

A game-based approximate verification of deep neural networks with provable guarantees

Min Wu et al.

THEORETICAL COMPUTER SCIENCE (2020)

Proceedings Paper Computer Science, Artificial Intelligence

Generalizing Universal Adversarial Attacks Beyond Additive Perturbations

Yanghao Zhang et al.

20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020) (2020)

Article Computer Science, Artificial Intelligence

Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations

Konda Reddy Mopuri et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2019)

Proceedings Paper Computer Science, Artificial Intelligence

Generation of Low Distortion Adversarial Attacks via Convex Programming

Tianyun Zhang et al.

2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019) (2019)

Proceedings Paper Computer Science, Artificial Intelligence

Defending Against Universal Perturbations With Shared Adversarial Training

Chaithanya Kumar Mummadi et al.

2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019) (2019)

Proceedings Paper Computer Science, Artificial Intelligence

NAG: Network for Adversary Generation

Konda Reddy Mopuri et al.

2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2018)

Proceedings Paper Computer Science, Artificial Intelligence

Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models

Mengying Sun et al.

KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING (2018)

Proceedings Paper Computer Science, Artificial Intelligence

Universal Adversarial Perturbations Against Semantic Image Segmentation

Jan Hendrik Metzen et al.

2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) (2017)

Proceedings Paper Computer Science, Artificial Intelligence

Image-to-Image Translation with Conditional Adversarial Networks

Phillip Isola et al.

30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) (2017)

Proceedings Paper Computer Science, Information Systems

Towards Evaluating the Robustness of Neural Networks

Nicholas Carlini et al.

2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP) (2017)

Article Computer Science, Artificial Intelligence

ImageNet Large Scale Visual Recognition Challenge

Olga Russakovsky et al.

INTERNATIONAL JOURNAL OF COMPUTER VISION (2015)

Article Computer Science, Artificial Intelligence

Image quality assessment: From error visibility to structural similarity

Z Wang et al.

IEEE TRANSACTIONS ON IMAGE PROCESSING (2004)