Article

Technical Study of Deep Learning in Cloud Computing for Accurate Workload Prediction

Journal

ELECTRONICS
Volume 12, Issue 3, Pages -

Publisher

MDPI
DOI: 10.3390/electronics12030650

Keywords

deep learning; workload prediction; cloud computing; machine learning

Proactive resource management in Cloud Services not only maximizes cost effectiveness but also enables issues such as Service Level Agreement (SLA) violations and the provisioning of resources to be overcome. Workload prediction using Deep Learning (DL) is a popular method of inferring the complicated multidimensional data of cloud environments to meet this requirement. The overall quality of the model depends on the quality of the data as much as on the architecture; therefore, the data used to train the model must be of good quality. However, existing works in this domain have either used a single data source or have not taken into account the importance of uniformity for unbiased and accurate analysis, which diminishes the efficacy of DL models. In this paper, we provide a technical analysis of using DL models such as Recurrent Neural Networks (RNN), Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM), and Convolutional Neural Networks (CNN) to exploit the time-series characteristics of real-world workloads from the Parallel Workloads Archive in the Standard Workload Format (SWF), with the aim of conducting an unbiased analysis. The robustness of these models is evaluated using the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) metrics. The findings highlight that the LSTM model exhibits the best performance compared to the other models. Additionally, to the best of our knowledge, insights into DL for workload prediction in cloud computing environments are insufficient in the literature. To address these challenges, we provide a comprehensive background on resource management and load prediction using DL, and then break down the models, error metrics, and data sources used across different bodies of work.
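
The abstract describes fitting sequence models (e.g., LSTM) to time-series workload traces and scoring them with MAE and RMSE. The sketch below is a rough illustration only of how such a windowed LSTM forecast and its MAE/RMSE evaluation could look; the synthetic series, window length, layer sizes, and training settings are assumptions for demonstration and are not taken from the paper or the SWF data.

```python
import numpy as np
import tensorflow as tf

# Hypothetical univariate workload series (e.g., job arrivals per interval).
# This is NOT the paper's data pipeline; it only illustrates the windowing + LSTM idea.
series = np.sin(np.linspace(0, 50, 1000)) + np.random.normal(0, 0.1, 1000)

def make_windows(data, window=20):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.array([data[i:i + window] for i in range(len(data) - window)])
    y = data[window:]
    return X[..., np.newaxis], y

X, y = make_windows(series)
split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# Minimal LSTM regressor; layer sizes and epochs are illustrative, not the paper's configuration.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)

# Evaluate with the same error metrics the paper reports: MAE and RMSE.
pred = model.predict(X_test, verbose=0).ravel()
mae = np.mean(np.abs(y_test - pred))
rmse = np.sqrt(np.mean((y_test - pred) ** 2))
print(f"MAE={mae:.4f}  RMSE={rmse:.4f}")
```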
