Article

Attacking Deep Reinforcement Learning With Decoupled Adversarial Policy

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Artificial Intelligence

ME-MADDPG: An efficient learning-based motion planning method for multiple agents in complex environments

Kaifang Wan et al.

Summary: This paper proposes ME-MADDPG, a new algorithm that introduces a mixed-experience strategy to improve the efficiency and adaptability of multi-agent motion planning methods. Experimental results demonstrate that the proposed algorithm converges faster and trains more effectively than the traditional MADDPG, and performs better in complex dynamic environments.

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS (2022)

Article Engineering, Civil

Interpretable End-to-End Urban Autonomous Driving With Latent Deep Reinforcement Learning

Jianyu Chen et al.

Summary: This article introduces an interpretable deep reinforcement learning method for end-to-end autonomous driving that uses a sequential latent environment model to handle complex urban scenarios and significantly reduce the sample complexity of reinforcement learning. Comparative tests in a realistic driving simulator show that the method outperforms many baselines, including DQN, DDPG, TD3, and SAC, in crowded urban environments.

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Article Computer Science, Software Engineering

Towards Security Threats of Deep Learning Systems: A Survey

Yingzhe He et al.

Summary: Deep learning has achieved tremendous success and popularity, but it also suffers from inherent security weaknesses. Building robust deep learning systems requires investigating and analyzing attacks against deep learning. The survey focuses on four types of attacks, studying their workflows, adversary capabilities, and attack goals, and distills its findings through quantitative and qualitative analysis.

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING (2022)

Article Computer Science, Hardware & Architecture

Taking Care of the Discretization Problem: A Comprehensive Study of the Discretization Problem and a Black-Box Adversarial Attack in Discrete Integer Domain

Lei Bu et al.

Summary: Neural-network-based classifiers are vulnerable to adversarial examples, where slight perturbations to benign images cause false predictions. The discretization problem arises when adversarial examples crafted in a continuous domain become benign after being denormalized back into the discrete integer domain. The study proposes a novel optimization method and demonstrates the potential of discrete optimization algorithms for crafting effective black-box attacks.

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING (2022)

Article Computer Science, Artificial Intelligence

Minimalistic Attacks: How Little It Takes to Fool Deep Reinforcement Learning Policies

Xinghua Qu et al.

Summary: Recent studies show that neural-network-based policies can be easily fooled by adversarial examples. This article explores the limits of a policy's vulnerability by defining three key settings for minimalistic attacks and testing their potency on six Atari games. The findings reveal that minimal perturbations can significantly degrade and deceive state-of-the-art policies.

IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS (2021)

Article Computer Science, Artificial Intelligence

Querying little is enough: Model inversion attack via latent information

Kanghua Mo et al.

Summary: With the advancement of machine learning technologies, online intelligent services use ML models to provide predictions and therefore face the risk of model inversion attacks (MIAs). The paper proposes a novel MIA scheme that leverages latent information extracted by an auxiliary neural network as high-dimensional features, making the attack harder for service administrators to defend against and motivating further privacy-preserving research.

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS (2021)

Article Computer Science, Artificial Intelligence

Adversarial attacks on text classification models using layer-wise relevance propagation

Jincheng Xu et al.

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS (2020)

Article Computer Science, Artificial Intelligence

An efficient framework for generating robust adversarial examples

Lili Zhang et al.

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS (2020)

Article Computer Science, Artificial Intelligence

Simple Iterative Method for Generating Targeted Universal Adversarial Perturbations

Hokuto Hirano et al.

ALGORITHMS (2020)

Article Computer Science, Artificial Intelligence

Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving

Carl-Johan Hoel et al.

IEEE TRANSACTIONS ON INTELLIGENT VEHICLES (2020)

Proceedings Paper Computer Science, Information Systems

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

Kenneth T. Co et al.

PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19) (2019)

Proceedings Paper Computer Science, Information Systems

Towards Evaluating the Robustness of Neural Networks

Nicholas Carlini et al.

2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP) (2017)

Article Multidisciplinary Sciences

Human-level control through deep reinforcement learning

Volodymyr Mnih et al.

NATURE (2015)

Article Computer Science, Artificial Intelligence

The Arcade Learning Environment: An Evaluation Platform for General Agents

Marc G. Bellemare et al.

JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH (2013)