4.7 Article

Modeling throughput sampling size for a cloud-hosted data scheduling and optimization service

Publisher

Elsevier
DOI: 10.1016/j.future.2013.01.003

Keywords

Distributed systems; Optimization; Network protocols; Distributed applications

Funding

  1. National Science Foundation, Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1115805]
  2. National Science Foundation, Directorate for Computer & Information Science & Engineering, Office of Advanced Cyberinfrastructure (OAC) [0926701]

Abstract

As big-data processing and analysis comes to dominate the usage of Cloud systems, the need for Cloud-hosted data scheduling and optimization services increases. One key component of such a service is the ability to estimate available bandwidth and achievable throughput, since all scheduling and optimization decisions are built on top of this information. The biggest challenge in providing these estimates is deciding dynamically what proportion of the actual dataset, when transferred, gives an accurate estimate of the bandwidth and throughput achieved by transferring the whole dataset. That proportion of the data is called the sampling size (or the probe size). Although small fixed sample sizes worked well for high-latency, low-bandwidth networks in the past, high-bandwidth networks require much larger and more dynamic sample sizes, since an accurate estimation now also depends on how fast the transfer protocol can saturate such a fat network link. In this study, we present a model that decides the optimal sampling size based on the data size and the estimated capacity of the network. Our results show that, in the majority of cases, the predicted sampling size closely matches the targeted best sampling size for a given file transfer. © 2013 Elsevier B.V. All rights reserved.
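The model's core trade-off (the probe must be large enough for the transfer protocol to ramp up and saturate a high-capacity link, yet remain a small fraction of the whole dataset) can be illustrated with a short sketch. The heuristic below is our own illustrative assumption, not the paper's actual model: it scales the probe with the bandwidth-delay product (BDP) and caps it by the dataset size. All parameter names and default values (bdp_multiplier, max_fraction, min_sample_bytes) are hypothetical.

    def optimal_sample_size(data_size_bytes: int, capacity_bps: float, rtt_s: float,
                            bdp_multiplier: float = 10.0, max_fraction: float = 0.25,
                            min_sample_bytes: int = 1 << 20) -> int:
        """Pick a probe (sampling) size for throughput estimation.

        Illustrative heuristic only: a protocol such as TCP needs several
        round trips of slow start before it saturates the link, so the probe
        is sized as a multiple of the bandwidth-delay product, floored at a
        minimum and capped at a fraction of the whole dataset.
        """
        bdp_bytes = (capacity_bps / 8.0) * rtt_s          # bandwidth-delay product in bytes
        sample = bdp_multiplier * bdp_bytes               # room for the protocol to ramp up
        sample = max(sample, min_sample_bytes)            # floor for slow links
        sample = min(sample, max_fraction * data_size_bytes)  # never probe most of the data
        return int(sample)

    # Example: a 50 GB dataset over a 10 Gbps link with 50 ms RTT
    # yields a probe of 625 MB, i.e. about 1.25% of the dataset.
    print(optimal_sample_size(50 * 10**9, 10 * 10**9, 0.05))

Scaling with the BDP captures the abstract's observation that on fat links the estimate depends on how quickly the protocol can fill the pipe: the higher the capacity and latency, the more bytes must be transferred before the measured throughput reflects the achievable one.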
