Journal
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
Volume 30, Issue 4, Pages 800-813
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2018.2870389
Keywords
Auto-scaling; elasticity; workload forecasting; service demand estimation; IaaS cloud; benchmarking; metrics
Funding
- German Research Foundation (DFG) [KO 3445/11-1]
- Swedish Research Council (VR) under the project Cloud Control
- Research Group of the Standard Performance Evaluation Corporation (SPEC)
Auto-scalers for clouds promise stable service quality at low cost in the face of changing workload intensity. The major public cloud providers offer trigger-based auto-scalers that act on fixed thresholds. However, trigger-based auto-scaling has reaction times on the order of minutes. Novel auto-scalers from the literature try to overcome the limitations of reactive mechanisms by employing proactive prediction methods. However, the adoption of proactive auto-scalers in production is still very low due to the high risk of relying on a single proactive method. This paper tackles the challenge of reducing this risk by proposing a new hybrid auto-scaling mechanism, called Chameleon, which combines multiple proactive methods with a reactive fallback mechanism. Chameleon employs on-demand, automated time-series forecasting to predict the arriving load intensity, combined with run-time service demand estimation to calculate the required resource consumption per work unit without any application instrumentation. We benchmark Chameleon against five state-of-the-art proactive and reactive auto-scalers in three different private and public cloud environments. We generate five representative workloads, each derived from a different real-world system trace. Overall, Chameleon achieves the best scaling behavior in terms of user-oriented and elasticity performance metrics, based on results from 400 hours of aggregated experiment time.
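To make the contrast in the abstract concrete, the sketch below illustrates (a) a trigger-based reactive rule that steps the instance count when a utilization metric crosses fixed thresholds, and (b) a proactive sizing step that combines a forecast arrival rate with an estimated service demand via the utilization law (U = λ·D). All names, thresholds, and the target utilization are illustrative assumptions, not Chameleon's actual implementation.

```python
import math

def reactive_decision(cpu_util: float, instances: int,
                      upper: float = 0.8, lower: float = 0.3,
                      floor: int = 1) -> int:
    """Trigger-based rule: scale out/in by one instance when
    utilization crosses a fixed upper/lower threshold."""
    if cpu_util > upper:
        return instances + 1          # scale out
    if cpu_util < lower and instances > floor:
        return instances - 1          # scale in
    return instances                  # no change

def proactive_decision(predicted_rate: float, service_demand: float,
                       target_util: float = 0.7) -> int:
    """Proactive sizing via the utilization law (U = lambda * D):
    provision enough instances so per-instance utilization stays
    near the target, given a forecast arrival rate (req/s) and an
    estimated service demand (s per request)."""
    return max(1, math.ceil(predicted_rate * service_demand / target_util))
```

For example, a forecast of 100 req/s with an estimated service demand of 0.02 s per request at a 70 percent utilization target yields three instances, whereas the reactive rule would only react after utilization had already breached a threshold.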