Article

Deep-attack over the deep reinforcement learning

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 250

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.108965

Keywords

Adversarial attack; Deep reinforcement learning; Adversarial training

Funding

  1. National Natural Science Foundation of China [62103330]
  2. Fundamental Research Funds for the Central Universities of China [3102021ZD-HQD09]


Abstract

Recent developments in adversarial attacks have made reinforcement learning more vulnerable, and various approaches exist for deploying attacks against it; the key is choosing the right timing for the attack. Some work designs an attack evaluation function and selects critical points to attack whenever the function's value exceeds a certain threshold, but because this approach ignores long-term impact, it struggles to find the right place to deploy an attack. In addition, appropriate assessment indicators during attacks are lacking. To make attacks more intelligent and to remedy these problems, we propose a reinforcement learning-based attacking framework that considers effectiveness and stealthiness simultaneously, and we propose a new metric to evaluate the attack model's performance in these two aspects. Experimental results demonstrate the effectiveness of the proposed model and the quality of the proposed evaluation metric. Furthermore, we validate the model's transferability, as well as its robustness under adversarial training. (C) 2022 Elsevier B.V. All rights reserved.
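The threshold-based timing strategy criticized in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's actual evaluation function: the criterion here is the gap between the victim policy's most- and least-preferred actions (as used in prior strategically timed attacks), with an arbitrary threshold of 0.6.

```python
import numpy as np

def attack_preference(policy_probs):
    """Gap between the most- and least-preferred actions under the
    victim's policy at the current state. A large gap means the agent
    strongly favors one action, so perturbing this state is most damaging."""
    p = np.asarray(policy_probs, dtype=float)
    return float(p.max() - p.min())

def should_attack(policy_probs, threshold=0.6):
    """Threshold rule: attack only at 'critical points' where the
    preference gap exceeds the threshold."""
    return attack_preference(policy_probs) > threshold

# A confident state (one action dominates) triggers the attack ...
print(should_attack([0.9, 0.05, 0.05]))   # True
# ... while a near-uniform state is skipped.
print(should_attack([0.4, 0.3, 0.3]))     # False
```

Because the rule inspects each state in isolation, it cannot account for the long-term effect of an attack, which is the limitation the proposed reinforcement learning-based attacker is designed to overcome.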
