Article

A GPU-based machine learning approach for detection of botnet attacks

Journal

COMPUTERS & SECURITY
Volume 123, Issue -, Pages -

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2022.102918

Keywords

Internet of Things; Machine learning; Random forest; Feature selection; Attack detection; Classification

Funding

  1. Zayed University, United Arab Emirates [R20093, R20090]

Abstract

Rapid development and adoption of the Internet of Things (IoT) has created new problems for securing these interconnected devices and networks. There are hundreds of thousands of IoT devices with underlying security vulnerabilities, such as insufficient device authentication/authorisation, making them vulnerable to malware infection. IoT botnets are designed to grow and compete with one another over insecure devices and networks. Once infected, a device monitors a Command-and-Control (C&C) server that indicates the target of a Distributed Denial of Service (DDoS) attack. These security issues, coupled with the continued growth of IoT, present a much larger attack surface for attackers to exploit in their attempts to disrupt or gain unauthorized access to networks, systems, and data. Large datasets available online provide good benchmarks for the development of accurate solutions for botnet detection; however, model training is often a time-consuming process. Significant advancement of GPU technology makes it possible to shorten the time required to train such large and complex models. This paper presents a methodology for the pre-processing of the IoT-Bot dataset and the classification of the attack types it includes. We describe the pre-processing steps used to prepare the data for training and compare the results achieved with GPU-accelerated versions of the Random Forest, k-Nearest Neighbour, Support Vector Machine (SVM), and Logistic Regression classifiers from the cuML library. Using our methodology, the best-trained models achieved scores of at least 0.99 for accuracy, precision, recall, and f1-score. Moreover, applying feature selection and training the models on a GPU significantly reduced the training and estimation times.

(c) 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
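The classifiers mentioned in the abstract come from RAPIDS cuML, which exposes scikit-learn-style estimators that run on the GPU. The sketch below illustrates this kind of GPU-accelerated training with a cuML Random Forest on pre-processed flow records; it is not the authors' exact pipeline, and the file name, column names, and hyperparameters are illustrative assumptions.

    # Minimal sketch of GPU-accelerated training with RAPIDS cuML (not the
    # authors' exact pipeline). File name, column names, and hyperparameters
    # are illustrative assumptions.
    import cudf
    from cuml.ensemble import RandomForestClassifier
    from cuml.model_selection import train_test_split
    from cuml.metrics import accuracy_score

    # Load the (already pre-processed) flow records directly into GPU memory.
    df = cudf.read_csv("iot_bot_preprocessed.csv")              # hypothetical file
    X = df.drop(columns=["attack_category"]).astype("float32")
    y = df["attack_category"].astype("int32")                   # hypothetical label column

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Random Forest trained and evaluated entirely on the GPU.
    clf = RandomForestClassifier(n_estimators=100, max_depth=16, random_state=42)
    clf.fit(X_train, y_train)

    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

Swapping in the other classifiers compared in the paper (k-Nearest Neighbour, SVM, Logistic Regression) follows the same pattern, since cuML mirrors the scikit-learn estimator interface.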
