3.8 Proceedings Paper

Knowledge Guided Two-player Reinforcement Learning for Cyber Attacks and Defenses

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/ICMLA55696.2022.00213

Keywords

-

Funding

  1. National Security Agency
  2. National Science Foundation [2114892]

Abstract

Cyber defense exercises are an important avenue to understand the technical capacity of organizations when faced with cyber-threats. Information derived from these exercises often leads to finding unseen methods to exploit vulnerabilities in an organization. These often lead to better defense mechanisms that can counter previously unknown exploits. With recent developments in cyber battle simulation platforms, we can generate a defense exercise environment and train reinforcement learning (RL) based autonomous agents to attack the system described by the simulated environment. In this paper, we describe a two-player game-based RL environment that simultaneously improves the performance of both the attacker and defender agents. We further accelerate the convergence of the RL agents by guiding them with expert knowledge from Cybersecurity Knowledge Graphs on attack and mitigation steps. We have implemented and integrated our proposed approaches into the CyberBattleSim system.
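To make the setup concrete, the following is a minimal, self-contained sketch of the two ideas the abstract describes: an alternating attacker/defender training loop and a knowledge-graph-derived prior that biases the attacker's exploration. The toy environment, action names, reward values, and the KG_ATTACK_PRIOR weights are illustrative assumptions made for this sketch only; they are not the paper's actual implementation and do not reproduce CyberBattleSim's API (the paper integrates with CyberBattleSim, while this example avoids depending on it).

```python
import random
from collections import defaultdict

class ToyCyberEnv:
    """Toy stand-in for a simulated enterprise network (hypothetical;
    the paper builds on Microsoft's CyberBattleSim, not reproduced here)."""
    ATTACK_ACTIONS = ["scan", "exploit_smb", "exploit_web", "lateral_move"]
    DEFENSE_ACTIONS = ["patch_smb", "patch_web", "reimage", "monitor"]

    def __init__(self):
        self.reset()

    def reset(self):
        self.compromised = 0      # number of compromised nodes
        self.patched = set()      # services the defender has patched
        self.steps = 0
        return self._state()

    def _state(self):
        return (self.compromised, frozenset(self.patched))

    def step_attacker(self, action):
        reward = 0.0
        if action == "exploit_smb" and "smb" not in self.patched:
            self.compromised += 1
            reward = 1.0
        elif action == "exploit_web" and "web" not in self.patched:
            self.compromised += 1
            reward = 1.0
        elif action == "lateral_move" and self.compromised > 0:
            reward = 0.5
        return self._state(), reward

    def step_defender(self, action):
        reward = 0.0
        if action == "patch_smb":
            self.patched.add("smb")
            reward = 0.5
        elif action == "patch_web":
            self.patched.add("web")
            reward = 0.5
        elif action == "reimage" and self.compromised > 0:
            self.compromised -= 1
            reward = 1.0
        self.steps += 1
        return self._state(), reward, self.steps >= 20

# Hypothetical prior derived from a Cybersecurity Knowledge Graph: each attack
# action is weighted by how strongly the CKG links it to the current scenario.
KG_ATTACK_PRIOR = {"scan": 0.1, "exploit_smb": 0.6, "exploit_web": 0.5, "lateral_move": 0.3}

def choose(q, state, actions, eps, prior=None):
    """Epsilon-greedy action selection; exploration is biased by the KG prior."""
    if random.random() < eps:
        if prior is not None:
            return random.choices(actions, weights=[prior[a] for a in actions])[0]
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Alternating two-player tabular Q-learning: attacker acts, defender responds."""
    q_att, q_def = defaultdict(float), defaultdict(float)
    env = ToyCyberEnv()
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            a = choose(q_att, s, env.ATTACK_ACTIONS, eps, KG_ATTACK_PRIOR)
            s_mid, r_att = env.step_attacker(a)
            d = choose(q_def, s_mid, env.DEFENSE_ACTIONS, eps)
            s_next, r_def, done = env.step_defender(d)
            # Independent Q-learning updates; the defender's reward is reduced
            # by the attacker's gain to give the game a zero-sum flavour.
            best_att = max(q_att[(s_next, x)] for x in env.ATTACK_ACTIONS)
            q_att[(s, a)] += alpha * (r_att + gamma * best_att - q_att[(s, a)])
            best_def = max(q_def[(s_next, x)] for x in env.DEFENSE_ACTIONS)
            q_def[(s_mid, d)] += alpha * (r_def - r_att + gamma * best_def - q_def[(s_mid, d)])
            s = s_next
    return q_att, q_def

if __name__ == "__main__":
    q_attacker, q_defender = train()
    print("attacker Q-table entries:", len(q_attacker))
    print("defender Q-table entries:", len(q_defender))
```

In this sketch the knowledge-graph guidance simply reweights epsilon-greedy exploration toward actions the graph associates with the scenario; the paper's guidance mechanism over attack and mitigation steps may differ in form, but the intent is the same: steer early exploration toward expert-plausible actions to speed up convergence.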
