Journal
SC22: International Conference for High Performance Computing, Networking, Storage and Analysis
Publisher
IEEE
DOI: 10.1109/SC41404.2022.00015
Keywords
File systems; next generation networking
Funding
- European Project RED-SEA [955776]
- European Project DEEP-SEA [955606]
- ETH Postdoctoral Fellowship [19-2 FEL-50]
Storage systems in high-performance clusters and datacenters face increasingly demanding performance requirements. Remote direct memory access (RDMA) accelerates the data path to storage targets, and fully programmable SmartNICs can additionally offload storage policies, reducing latency and CPU utilization.
High-performance clusters and datacenters pose increasingly demanding requirements on storage systems. If these systems do not operate at scale, applications are doomed to become I/O bound and waste compute cycles. To accelerate the data path to remote storage nodes, remote direct memory access (RDMA) has been embraced by storage systems to let data flow from the network to storage targets, reducing overall latency and CPU utilization. Yet, this approach still involves CPUs on the data path to enforce storage policies such as authentication, replication, and erasure coding. We show how storage policies can be offloaded to fully programmable SmartNICs, without involving host CPUs. By using PsPIN, an open-hardware SmartNIC, we show latency improvements for writes (up to 2x), data replication (up to 2x), and erasure coding (up to 2x), when compared to respective CPU- and RDMA-based alternatives.
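As a concrete illustration of the kind of storage policy the abstract mentions, the sketch below shows a minimal XOR-based parity scheme (RAID-5-style erasure coding, tolerating one lost block). This is a hypothetical example of the computation such a policy performs, not the paper's PsPIN handler code; the function names are invented for illustration.

```python
def xor_parity(blocks):
    """Compute a parity block as the byte-wise XOR of equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild the single missing block: XOR of all survivors plus the parity."""
    return xor_parity(surviving_blocks + [parity])

# Usage: encode three data blocks, lose one, and rebuild it.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(data)
assert recover([data[0], data[2]], p) == data[1]  # data[1] is recovered
```

Production erasure codes (e.g., Reed-Solomon) tolerate multiple failures, but the per-byte arithmetic pattern above is representative of the data-path work that, as the paper argues, can run on the SmartNIC instead of the host CPU.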