Article

Improving in-memory file system reading performance by fine-grained user-space cache mechanisms

Journal

JOURNAL OF SYSTEMS ARCHITECTURE
Volume 115

Publisher

ELSEVIER
DOI: 10.1016/j.sysarc.2021.101994

Keywords

Distributed file system; Cache policy; Submodular optimization; Distributed system

Funding

  1. National Key R&D program of China [2019YFC1711000]
  2. National Natural Science Foundation of China [62072230, 61702254, U1811461]
  3. Jiangsu Province Industry Support Program [BE2017155]
  4. Collaborative Innovation Center of Novel Software Technology and Industrialization, Jiangsu, China


Summary

This paper proposes a two-layer user-space cache management mechanism for improving the performance of distributed in-memory file systems. Experimental results show that the proposed caching strategies significantly enhance reading performance and outperform existing cache algorithms. The idea of the client-side caching framework has been adopted by the Alluxio open source community, demonstrating the practical benefits of this work.

Abstract

Nowadays, as the memory capacity of servers becomes ever larger, distributed in-memory file systems, which enable applications to interact with data at high speed, have been widely used. However, existing distributed in-memory file systems still suffer from low data-access performance when reading small data, which seriously reduces their usefulness in many important big data scenarios. In this paper, we analyze the factors that affect the performance of reading in-memory files and propose a two-layer user-space cache management mechanism: in the first layer, we cache data packet references to reduce frequent page-fault interruptions (packet-level cache); in the second layer, we cache and manage small file data units to avoid redundant inter-process communication (object-level cache). We further design a fine-grained caching model based on submodular function optimization theory for efficiently managing the variable-length cache units with partially overlapping fragments on the client side. Experimental results on synthetic and real-world workloads show that, compared with existing state-of-the-art systems, the first-level cache doubles reading performance on average, and the second-level cache improves random reading performance by more than 4 times. Our caching strategies also outperform state-of-the-art cache algorithms by more than 20% in hit ratio. Furthermore, the proposed client-side caching framework idea has been adopted by the Alluxio open source community, which shows the practical benefits of this work.
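The abstract describes a fine-grained model that manages variable-length cache units with partially overlapping fragments via submodular optimization. The following is a minimal illustrative sketch, not the paper's actual formulation: all names, the byte-range representation, and the weighted-coverage objective are assumptions. The key submodularity idea is that overlapping bytes count only once toward the objective, so the marginal gain of a fragment shrinks as more of it is already covered; a cost-benefit greedy then selects fragments under a byte budget.

```python
# Hypothetical sketch of submodular cache-unit selection.
# A cache "unit" is a weighted byte range; the objective is weighted
# coverage of bytes, where overlapping bytes are counted only once
# (this makes the objective monotone submodular).
from dataclasses import dataclass

@dataclass(frozen=True)
class Unit:
    start: int     # byte offset of the cached fragment
    end: int       # exclusive end offset
    weight: float  # e.g. observed access frequency of this fragment

def uncovered_len(covered, start, end):
    """Bytes of [start, end) not already inside the disjoint intervals in `covered`."""
    length = end - start
    for s, e in covered:
        lo, hi = max(s, start), min(e, end)
        if lo < hi:
            length -= hi - lo
    return length

def add_interval(covered, start, end):
    """Merge [start, end) into a sorted list of disjoint intervals."""
    merged = []
    for s, e in covered:
        if e < start or s > end:
            merged.append((s, e))
        else:
            start, end = min(s, start), max(e, end)
    merged.append((start, end))
    return sorted(merged)

def greedy_select(units, budget):
    """Greedily pick units maximizing marginal gain per stored byte,
    until no affordable unit adds coverage (classic submodular greedy)."""
    covered, chosen, used = [], [], 0
    remaining = list(units)
    while remaining:
        best, best_ratio = None, 0.0
        for u in remaining:
            size = u.end - u.start
            gain = u.weight * uncovered_len(covered, u.start, u.end)
            if size > 0 and used + size <= budget and gain / size > best_ratio:
                best, best_ratio = u, gain / size
        if best is None:
            break
        chosen.append(best)
        used += best.end - best.start
        covered = add_interval(covered, best.start, best.end)
        remaining.remove(best)
    return chosen
```

For example, with fragments `Unit(0, 100, 2.0)`, `Unit(50, 150, 1.0)`, and `Unit(200, 250, 3.0)` under a 200-byte budget, the greedy pass picks the high-weight 50-byte fragment first, then the heavier 100-byte fragment; the remaining fragment's marginal value drops because half of it is already covered, and it no longer fits the budget.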

