Article

Hadoop Data Reduction Framework: Applying Data Reduction at the DFS Layer

Journal

IEEE ACCESS
Volume 9, Pages 152704-152717

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3127499

Keywords

Codes; Software; Dictionaries; Redundancy; Libraries; Big Data; File systems; Data compression; data deduplication; distributed file system; Hadoop; HDFS

Abstract

Big-data processing systems such as Hadoop, which usually rely on distributed file systems (DFSs), require data reduction schemes to maximize storage space efficiency. These schemes have different tradeoffs, and no all-purpose scheme is applicable to all data; users must select a suitable scheme according to their data. To accommodate this requirement, application software or file systems (FSs) provide a fixed selection of schemes. However, the provided schemes are insufficient for all data types, and when novel schemes emerge, extending the selection can be problematic. If the source code of the application or FS is available, it can in principle be extended, but doing so requires extensive labor and is virtually impossible without the code maintainers' assistance. If the source code is unavailable, the problem cannot be tackled at all. This paper proposes a previously unexplored solution: a modular DFS design that eases the use of data reduction schemes through existing programming techniques. The advantages of the presented approach are threefold. First, adding new schemes is easy, and the schemes are transparent to the application code, which requires no extensions. Second, the modular structure requires only minimal modification to existing DFSs and incurs minimal performance overhead. Third, users can compile schemes separately from the DFS without the FS or DFS source code. To demonstrate the design's effectiveness, we implemented it by minimally extending the Hadoop DFS (HDFS) and named the result the Hadoop Data Reduction Framework (HDRF). We designed HDRF to operate with minimal overhead and tested it extensively. Experimental results indicate that it has negligible overhead over existing approaches. In a number of cases, thanks to the incorporated data reduction schemes, it offers up to 48.96% higher throughput while achieving the best storage reduction among our tested setups.
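The abstract does not show HDRF's actual interfaces, so the following Java sketch only illustrates the general idea it describes: a data reduction scheme compiled separately from the DFS and loaded by name at run time, so that neither the application nor the DFS source code needs to change. All names here (DataReductionScheme, DeflateScheme, SchemeLoader) are hypothetical and not part of HDRF or HDFS.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Hypothetical plug-in contract a DFS layer could expose to scheme authors. */
interface DataReductionScheme {
    byte[] reduce(byte[] block);                              // applied on the write path
    byte[] restore(byte[] block) throws DataFormatException;  // applied on the read path
}

/** Example scheme: plain DEFLATE compression, compiled independently of the DFS. */
class DeflateScheme implements DataReductionScheme {
    @Override
    public byte[] reduce(byte[] block) {
        Deflater deflater = new Deflater();
        deflater.setInput(block);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    @Override
    public byte[] restore(byte[] block) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(block);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            if (n == 0 && inflater.needsInput()) break;  // guard against truncated input
            out.write(buf, 0, n);
        }
        inflater.end();
        return out.toByteArray();
    }
}

/** The DFS side needs only a small hook: instantiate the scheme named in its config. */
class SchemeLoader {
    static DataReductionScheme load(String className) throws ReflectiveOperationException {
        return (DataReductionScheme) Class.forName(className)
                .getDeclaredConstructor().newInstance();
    }
}
```

Under this assumed design, a scheme author only needs the plug-in interface on the classpath; a new scheme jar can be compiled and deployed without rebuilding the DFS, which mirrors the separate-compilation property the abstract claims for HDRF.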
