3.8 Proceedings Paper

The Case For In-Network Computing On Demand

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3302424.3303979


Funding

  1. JSPS Research Fellowship
  2. JSPS KAKENHI [JP17J02958]
  3. Swiss National Science Foundation [200021_166132]
  4. Leverhulme Trust [ECF-2016-289]
  5. Isaac Newton Trust
  6. Western Digital

Abstract

Programmable network hardware can run services traditionally deployed on servers, resulting in orders-of-magnitude improvements in performance. Yet, despite these performance improvements, network operators remain skeptical of in-network computing. The conventional wisdom is that the operational costs from increased power consumption outweigh any performance benefits. Unless in-network computing can justify its costs, it will be disregarded as yet another academic exercise. In this paper, we challenge that assumption by providing a detailed power analysis of several in-network computing use cases. Our experiments show that in-network computing can be extremely power-efficient. In fact, for a single watt, a software system running on a commodity CPU can be improved by a factor of 100x using an FPGA, and by a factor of 1000x using an ASIC implementation. However, this efficiency depends on the system load. To address changing workloads, we propose in-network computing on demand, where services can be dynamically moved between servers and the network. By shifting the placement of services on demand, data centers can optimize for both performance and power efficiency.
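The central observation in the abstract is that the most power-efficient placement of a service (server software, FPGA-based NIC, or switch ASIC) depends on the current load, which motivates moving services on demand. The sketch below is a minimal illustration of such a placement decision, assuming a simple power model of static power plus per-request energy; the target names and all numbers are hypothetical placeholders, not measurements or mechanisms from the paper.

```python
# Toy placement controller for "in-network computing on demand".
# All device names, static power values, and per-request energies are
# made-up placeholders used only to illustrate the load-dependent
# break-even point described in the abstract.

from dataclasses import dataclass


@dataclass
class Target:
    name: str
    static_watts: float        # extra power drawn while the offload is active
    joules_per_request: float  # marginal energy per request served


TARGETS = [
    Target("cpu-software", static_watts=0.0,  joules_per_request=1e-4),
    Target("fpga-nic",     static_watts=15.0, joules_per_request=1e-6),
    Target("switch-asic",  static_watts=40.0, joules_per_request=1e-7),
]


def power(target: Target, req_per_sec: float) -> float:
    """Estimated total power (W) of serving req_per_sec on this target."""
    return target.static_watts + target.joules_per_request * req_per_sec


def choose_placement(req_per_sec: float) -> Target:
    """Pick the placement with the lowest estimated power at this load."""
    return min(TARGETS, key=lambda t: power(t, req_per_sec))


if __name__ == "__main__":
    for load in (1e4, 1e6, 1e8):  # requests per second
        best = choose_placement(load)
        print(f"{load:>12,.0f} req/s -> {best.name} ({power(best, load):.1f} W)")
```

Under these placeholder numbers, the controller keeps the service in software at low load (the offload's static power dominates), shifts it to the FPGA at moderate load, and to the ASIC at very high load, which is the qualitative behavior the on-demand proposal exploits.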


