4.5 Article

Time series-based workload prediction using the statistical hybrid model for the cloud environment

Journal

COMPUTING
Volume 105, Issue 2, Pages 353-374

Publisher

SPRINGER WIEN
DOI: 10.1007/s00607-022-01129-7

Keywords

Auto-regression integrated moving average (ARIMA); Artificial neural network (ANN); Savitzky-Golay filter; Time series forecasting; CPU; Memory usage; Cloud computing

Resource management in a cloud setting is effectively achieved by predicting CPU and memory utilization with a hybrid ARIMA-ANN model. Combining linear and nonlinear components in the prediction improves accuracy, and introducing a range of forecast values reduces forecasting errors. The over-estimation rate (OER) and under-estimation rate (UER) are used to cope with the error produced by over- or under-estimation of resource utilization.
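
As a rough illustration of these two error measures, the sketch below assumes OER and UER are the relative amounts by which forecasts over- and under-shoot the actual utilization. The paper follows Engelbrecht and van Greunen (CNSM 2015), whose exact formulation may differ, so the function name and normalization here are assumptions rather than the authors' definition.

```python
import numpy as np

def oer_uer(actual, predicted):
    """Hypothetical over-estimation rate (OER) and under-estimation rate (UER).

    Assumes each rate is the total over- / under-shoot normalized by the
    total actual utilization; the definition used in the paper
    (Engelbrecht and van Greunen, CNSM 2015) may differ.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    over = np.clip(predicted - actual, 0.0, None)   # amount over-provisioned
    under = np.clip(actual - predicted, 0.0, None)  # amount under-provisioned
    oer = over.sum() / actual.sum()
    uer = under.sum() / actual.sum()
    return oer, uer
```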
Resource management is addressed in the infrastructure-as-a-service (IaaS) setting, where the resource management module provisions the available resources on demand. Prediction of central processing unit (CPU) and memory utilization aids resource provisioning in the cloud. Using a hybrid ARIMA-ANN model, this study forecasts future CPU and memory utilization, and the range of values derived from the forecasts is used for resource management. The ARIMA model captures the linear components of the CPU and memory utilization patterns in the cloud traces. The artificial neural network (ANN) uses the residuals of the ARIMA model to recognize and magnify the nonlinear components in the traces. The resource utilization patterns are then predicted as a combination of the linear and nonlinear components. Because point-value forecasting may not be the best method for multi-step prediction of resource utilization in a cloud setting, the Savitzky-Golay filter derives a range of forecast values from the predicted values and the previous history. Introducing this range of values reduces the forecasting error, and we employ the over-estimation rate (OER) and under-estimation rate (UER) reported by Engelbrecht HA and van Greunen M (in: Network and Service Management (CNSM), 2015 11th International Conference, 2015) to cope with the errors produced by over- or under-estimation of CPU and memory utilization. The prediction accuracy is evaluated through statistics-based analysis on Google's 29-day trace and the BitBrain (BB) trace.
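
For concreteness, the following Python sketch outlines the kind of pipeline the abstract describes. It is not the authors' implementation; the choice of libraries (statsmodels, scikit-learn, SciPy), the ARIMA order, the number of residual lags, the MLP size, and the Savitzky-Golay window are all illustrative assumptions.

```python
# Minimal sketch of a hybrid ARIMA-ANN forecast with a Savitzky-Golay
# smoothed range. Assumes the input series is long enough for the chosen
# lag count and filter window; all hyperparameters are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from scipy.signal import savgol_filter

def hybrid_forecast(series, horizon=12, order=(2, 1, 2), n_lags=5):
    series = np.asarray(series, dtype=float)

    # 1) ARIMA captures the linear structure of the utilization trace.
    arima_fit = ARIMA(series, order=order).fit()
    linear_forecast = arima_fit.forecast(steps=horizon)
    residuals = arima_fit.resid

    # 2) An ANN (here a small MLP) models the nonlinear structure left in
    #    the ARIMA residuals, using lagged residuals as inputs.
    X = np.column_stack([residuals[i:len(residuals) - n_lags + i]
                         for i in range(n_lags)])
    y = residuals[n_lags:]
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(X, y)

    # 3) Roll the ANN forward to forecast residuals over the horizon.
    window = list(residuals[-n_lags:])
    nonlinear_forecast = []
    for _ in range(horizon):
        nxt = ann.predict(np.array(window[-n_lags:]).reshape(1, -1))[0]
        nonlinear_forecast.append(nxt)
        window.append(nxt)

    # 4) Combine the linear and nonlinear components.
    combined = np.asarray(linear_forecast) + np.asarray(nonlinear_forecast)

    # 5) A Savitzky-Golay filter over history + forecast gives a smoothed
    #    centre line; the spread around it yields a forecast range.
    joined = np.concatenate([series, combined])
    smooth = savgol_filter(joined, window_length=11, polyorder=3)
    band = np.abs(joined - smooth)[-horizon:]
    return combined, combined - band, combined + band
```

Given a CPU-utilization trace `cpu_series`, `hybrid_forecast(cpu_series)` would return a point forecast together with a lower/upper band, reflecting the range-of-values idea the abstract argues is more robust than a single point forecast.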

