Article

Improving in-memory file system reading performance by fine-grained user-space cache mechanisms

Journal

JOURNAL OF SYSTEMS ARCHITECTURE
Volume 115, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.sysarc.2021.101994

Keywords

Distributed file system; Cache policy; Submodular optimization; Distributed system

Funding

  1. National Key R&D program of China [2019YFC1711000]
  2. National Natural Science Foundation of China [62072230, 61702254, U1811461]
  3. Jiangsu Province Industry Support Program [BE2017155]
  4. Collaborative Innovation Center of Novel Software Technology and Industrialization, Jiangsu, China


This paper proposes a two-layer user space cache management mechanism for improving the performance of distributed in-memory file systems. Experimental results show that the proposed caching strategies can significantly enhance reading performance and outperform existing cache algorithms. The idea of the client-side caching framework has been adopted by the Alluxio open source community, demonstrating practical benefits.
Nowadays, as the memory capacity of servers grows ever larger, distributed in-memory file systems, which enable applications to interact with data at high speed, have been widely used. However, existing distributed in-memory file systems still suffer from low data-access performance when reading small data, which seriously reduces their usefulness in many important big-data scenarios. In this paper, we analyze the factors that affect the performance of reading in-memory files and propose a two-layer user-space cache management mechanism: in the first layer, we cache data packet references to reduce frequent page-fault interrupts (packet-level cache); in the second layer, we cache and manage small-file data units to avoid redundant inter-process communication (object-level cache). We further design a fine-grained caching model based on submodular function optimization theory to efficiently manage variable-length cache units with partially overlapping fragments on the client side. Experimental results on synthetic and real-world workloads show that, compared with existing cutting-edge systems, the first-level cache can double reading performance on average, and the second-level cache can improve random-read performance by more than 4x. Our caching strategies also outperform cutting-edge cache algorithms by more than 20% in hit ratio. Furthermore, the proposed client-side caching framework idea has been adopted by the Alluxio open-source community, which demonstrates the practical benefits of this work.
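The fine-grained caching model above relies on submodular optimization to choose which variable-length, partially overlapping cache units to keep. The paper's exact model is not reproduced here, but the general technique can be illustrated with a hedged sketch: treating cached byte ranges as sets, the "hot bytes covered" objective is submodular, so a greedy algorithm that repeatedly picks the unit with the best marginal gain per byte gives a provably good approximation under a capacity budget. All names, weights, and ranges below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's code): greedy selection of
# variable-length cache units whose byte ranges may overlap, maximizing the
# total "heat" of covered bytes under a capacity budget. Coverage of weighted
# byte ranges is a classic submodular function, so greedy selection by
# marginal gain per byte is a standard approximation strategy.

def covered_value(units, heat):
    """Total heat of bytes covered by at least one selected unit."""
    covered = set()
    for start, end in units:          # unit = half-open byte range [start, end)
        covered.update(range(start, end))
    return sum(heat.get(b, 0) for b in covered)

def greedy_cache_selection(candidates, heat, capacity):
    """Greedily pick cache units by marginal heat gain per byte until
    no remaining unit fits within the capacity budget."""
    selected, used = [], 0
    remaining = list(candidates)
    while remaining:
        base = covered_value(selected, heat)
        best, best_gain = None, 0.0
        for unit in remaining:
            size = unit[1] - unit[0]
            if used + size > capacity:
                continue              # unit does not fit in remaining budget
            gain = (covered_value(selected + [unit], heat) - base) / size
            if gain > best_gain:
                best, best_gain = unit, gain
        if best is None:              # nothing fits or nothing adds value
            break
        selected.append(best)
        used += best[1] - best[0]
        remaining.remove(best)
    return selected

# Example: bytes 0-9 are "hot"; two candidate units overlap on bytes 4-7,
# and a third covers only cold bytes. Budget: 16 bytes.
heat = {b: 1 for b in range(0, 10)}
candidates = [(0, 8), (4, 12), (20, 28)]
print(greedy_cache_selection(candidates, heat, capacity=16))
# → [(0, 8), (4, 12)]  (the cold unit (20, 28) adds no value)
```

The per-byte normalization matters here: because cache units have variable lengths, ranking by raw marginal gain would favor large units that waste budget on already-covered or cold bytes.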
