Journal
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3302424.3303979
Keywords
-
Funding
- JSPS Research Fellowship
- JSPS KAKENHI [JP17J02958]
- Swiss National Science Foundation [200021_166132]
- Leverhulme Trust [ECF-2016-289]
- Isaac Newton Trust
- Western Digital
Abstract
Programmable network hardware can run services traditionally deployed on servers, resulting in orders-of-magnitude improvements in performance. Yet, despite these performance improvements, network operators remain skeptical of in-network computing. The conventional wisdom is that the operational costs from increased power consumption outweigh any performance benefits. Unless in-network computing can justify its costs, it will be disregarded as yet another academic exercise. In this paper, we challenge that assumption by providing a detailed power analysis of several in-network computing use cases. Our experiments show that in-network computing can be extremely power-efficient. In fact, for a single watt, a software system on a commodity CPU can be improved by a factor of 100 using an FPGA, and by a factor of 1000 using ASIC implementations. However, this efficiency depends on the system load. To address changing workloads, we propose in-network computing on demand, where services can be dynamically moved between servers and the network. By shifting the placement of services on demand, data centers can optimize for both performance and power efficiency.
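The per-watt factors quoted in the abstract reduce to a simple throughput-per-watt ratio between platforms. A minimal sketch of that comparison follows; the throughput and power numbers below are illustrative assumptions chosen to reproduce the abstract's 100x/1000x factors, not measurements from the paper.

```python
# Hypothetical performance-per-watt comparison (numbers are assumptions,
# not data from the paper): throughput in ops/sec, power draw in watts.
platforms = {
    "CPU":  {"throughput": 1.0e6, "power": 100.0},  # assumed software baseline
    "FPGA": {"throughput": 2.0e7, "power": 20.0},   # assumed
    "ASIC": {"throughput": 1.0e8, "power": 10.0},   # assumed
}

# Baseline efficiency: what one watt buys on the commodity CPU.
baseline = platforms["CPU"]["throughput"] / platforms["CPU"]["power"]

for name, p in platforms.items():
    perf_per_watt = p["throughput"] / p["power"]
    print(f"{name}: {perf_per_watt:.0f} ops/s per watt "
          f"({perf_per_watt / baseline:.0f}x over CPU)")
```

With these assumed figures the FPGA comes out at 100x and the ASIC at 1000x the CPU's ops/s-per-watt, mirroring the factors the abstract reports; real deployments would substitute measured throughput and power for each workload.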