Journal
Proceedings of the Fourteenth EuroSys Conference 2019 (EuroSys '19)
Publisher
Association for Computing Machinery (ACM)
DOI: 10.1145/3302424.3303979
Funding
- JSPS Research Fellowship
- JSPS KAKENHI [JP17J02958]
- Swiss National Science Foundation [200021_166132]
- Leverhulme Trust [ECF-2016-289]
- Isaac Newton Trust
- Western Digital
Abstract
Programmable network hardware can run services traditionally deployed on servers, yielding orders-of-magnitude performance improvements. Despite these gains, network operators remain skeptical of in-network computing: the conventional wisdom is that the operational costs of increased power consumption outweigh any performance benefits. Unless in-network computing can justify its costs, it will be disregarded as yet another academic exercise. In this paper, we challenge that assumption by providing a detailed power analysis of several in-network computing use cases. Our experiments show that in-network computing can be extremely power-efficient: per watt, a software system on a commodity CPU can be outperformed by a factor of 100 using an FPGA, and by a factor of 1,000 using ASIC implementations. However, this efficiency depends on the system load. To address changing workloads, we propose in-network computing on demand, where services can be dynamically moved between servers and the network. By shifting the placement of services on demand, data centers can optimize for both performance and power efficiency.
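The abstract's closing idea, moving a service between servers and network hardware as load changes, can be sketched as a simple load-aware placement policy. This is a minimal illustration, not the paper's method: the per-watt efficiency ratios follow the abstract (FPGA roughly 100x and ASIC roughly 1000x a commodity CPU), while the idle-power figures, platform table, and function names are invented for illustration.

```python
# Hypothetical sketch of "in-network computing on demand": pick the
# placement (server CPU, FPGA, or ASIC) with the lowest estimated
# total power draw for the current load.

# Relative throughput per watt, with CPU as the baseline (ratios from
# the abstract). The idle-power numbers are made up for illustration:
# they model the fixed cost of keeping each device powered, which is
# why the most efficient choice depends on load.
PLATFORMS = {
    "cpu":  {"throughput_per_watt": 1.0,    "idle_watts": 10.0},
    "fpga": {"throughput_per_watt": 100.0,  "idle_watts": 25.0},
    "asic": {"throughput_per_watt": 1000.0, "idle_watts": 40.0},
}

def total_watts(platform: str, load: float) -> float:
    """Estimated power to serve `load` units of throughput on `platform`."""
    p = PLATFORMS[platform]
    return p["idle_watts"] + load / p["throughput_per_watt"]

def place_service(load: float) -> str:
    """Choose the placement that minimizes estimated total power."""
    return min(PLATFORMS, key=lambda name: total_watts(name, load))
```

With these illustrative numbers, a lightly loaded service stays on the CPU (its fixed cost is lowest), while a heavily loaded one migrates into the network, which mirrors the abstract's claim that in-network computing's power efficiency depends on system load.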