Article

Robust Botnet DGA Detection: Blending XAI and OSINT for Cyber Threat Intelligence Sharing

Journal

IEEE ACCESS
Volume 10, Issue -, Pages 34613-34624

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3162588

Keywords

Botnet; Computational modeling; Artificial intelligence; Entropy; Servers; Deep learning; Blocklists; Adversarial machine learning; botnet; cybersecurity; DGA; explainable artificial intelligence; threat intelligence

We investigated 12 years of DNS query logs from our campus network and identified malicious botnet domain generation algorithm (DGA) traffic. DGA-based botnets are difficult to detect using cyber threat intelligence (CTI) systems based on blocklists; artificial intelligence (AI)/machine learning (ML)-based CTI systems are required. This study (1) proposed a model to detect DGA-based traffic based on statistical features, with datasets comprising 55 DGA families, (2) discussed how CTI can be expanded with the computable CTI paradigm, and (3) described how to improve the explainability of the model outputs by blending explainable AI (XAI) and open-source intelligence (OSINT) to address trust problems, serving as an antidote to skepticism about shared models and preventing automation bias. We define XAI-OSINT blending as the aggregation of OSINT for AI/ML model outcome validation. Experimental results show the effectiveness of our models (96.3% accuracy). Our random forest model provides better robustness against three state-of-the-art DGA adversarial attacks (CharBot, DeepDGA, MaskDGA) compared with character-based deep learning models (Endgame, CMU, NYU, MIT). We demonstrate the sharing mechanism and confirm that XAI-OSINT blending improves trust for CTI sharing, as evidence validating our proposed computable CTI paradigm to assist security analysts in security operations centers using an automated, explainable OSINT approach (for a second opinion). Therefore, computable CTI reduces manual intervention in critical cybersecurity decision-making.
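The abstract describes detecting DGA traffic from statistical features of domain names. As a minimal, illustrative sketch (the feature set below — length, character entropy, digit and vowel ratios — is an assumed, common choice for this task, not necessarily the authors' exact feature set), such features can be computed as follows; a classifier such as a random forest would then be trained on them:

```python
import math
from collections import Counter

def dga_features(domain: str) -> dict:
    """Compute simple lexical/statistical features of a domain's
    second-level label, of the kind often used for DGA detection.
    (Illustrative feature set; not the paper's exact features.)"""
    label = domain.split(".")[0].lower()
    n = len(label) or 1
    # Shannon entropy over character frequencies: algorithmically
    # generated labels tend to have higher entropy than real words.
    counts = Counter(label)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {
        "length": len(label),
        "entropy": entropy,
        "digit_ratio": sum(ch.isdigit() for ch in label) / n,
        "vowel_ratio": sum(ch in "aeiou" for ch in label) / n,
    }

# A legible domain vs. a DGA-looking one: the latter scores higher
# entropy and lower vowel ratio.
print(dga_features("google.com"))
print(dga_features("xjp3q8vz1kd0fy.net"))
```

In a full pipeline these feature vectors would feed a random forest (e.g. scikit-learn's `RandomForestClassifier`), whose per-feature importances also lend themselves to the XAI explanations the paper discusses.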

