Article

GPUfs: Integrating a File System with GPUs

Journal

ACM Transactions on Computer Systems
Volume 32, Issue 1

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/2553081

Keywords

Performance; Accelerators; operating systems; operating systems design; GPGPUs; file systems

Funding

  1. NSF [CNS-1017785, CNS-1017206]
  2. Andrew and Erna Finci Viterbi Fellowship
  3. NVIDIA research award
  4. NSF Directorate for Computer & Information Science & Engineering, Division of Computer and Network Systems [1017206]
  5. NSF Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1333594]

Abstract

As GPU hardware becomes increasingly general-purpose, it is quickly outgrowing the traditional, constrained GPU-as-coprocessor programming model. This article advocates for extending standard operating system services and abstractions to GPUs in order to facilitate program development and enable harmonious integration of GPUs in computing systems. As an example, we describe the design and implementation of GPUfs, a software layer which provides operating system support for accessing host files directly from GPU programs. GPUfs provides a POSIX-like API, exploits GPU parallelism for efficiency, and optimizes GPU file access by extending the host CPU's buffer cache into GPU memory. Our experiments, based on a set of real benchmarks adapted to use our file system, demonstrate the feasibility and benefits of the GPUfs approach. For example, a self-contained GPU program that searches for a set of strings throughout the Linux kernel source tree runs over seven times faster than on an eight-core CPU.
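
To make the abstract's "POSIX-like API" concrete, the following is a minimal CUDA sketch of how a GPU kernel might search file data through such an interface, loosely modeled on the string-search example above. The device functions gfile_open, gfile_read, and gfile_close are hypothetical placeholders for a GPUfs-style API (declared but not implemented here), and issuing one file call per thread block is an assumption made for illustration; only the general idea of GPU code reading host files directly comes from the article.

    #include <cuda_runtime.h>
    #include <cstddef>

    // Hypothetical device-side file API in the spirit of GPUfs's POSIX-like
    // interface. Names and signatures are illustrative assumptions for this
    // sketch, not the actual GPUfs functions.
    __device__ int    gfile_open(const char* path);
    __device__ size_t gfile_read(int fd, size_t offset, size_t size, char* dst);
    __device__ int    gfile_close(int fd);

    // Each thread block reads one chunk of a host file and scans it in
    // parallel for a byte pattern, counting matches. The file call is issued
    // once per block; the threads then divide the scanning work.
    // Matches that straddle chunk boundaries are ignored for brevity.
    __global__ void count_matches(const char* path, const char* pattern,
                                  int pattern_len, size_t chunk_size,
                                  unsigned long long* hits)
    {
        extern __shared__ char chunk[];   // staging buffer, sized at launch

        int fd = gfile_open(path);
        size_t offset = (size_t)blockIdx.x * chunk_size;
        size_t got = gfile_read(fd, offset, chunk_size, chunk);
        __syncthreads();                  // chunk fully staged before scanning

        // One candidate start position per thread, strided across the chunk.
        for (size_t i = threadIdx.x; i + (size_t)pattern_len <= got; i += blockDim.x) {
            bool match = true;
            for (int j = 0; j < pattern_len; ++j) {
                if (chunk[i + j] != pattern[j]) { match = false; break; }
            }
            if (match) atomicAdd(hits, 1ULL);
        }

        gfile_close(fd);
    }

A launch would size the dynamic shared memory to the chunk size, for example count_matches<<<num_chunks, 256, chunk_size>>>(path, pattern, pattern_len, chunk_size, hits), with chunk_size small enough to fit in shared memory. Because GPUfs extends the host CPU's buffer cache into GPU memory, blocks that re-read the same file region can be served from GPU-resident cached pages instead of crossing the PCIe bus each time, which is the optimization the abstract describes.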

Authors

Mark Silberstein, Bryan Ford, Idit Keidar, Emmett Witchel
