4.6 Article

Efficient Large-Capacity Caching in Cloud Storage Using Skip-Gram-Based File Correlation Analysis

Journal

IEEE Access
Volume 11, Pages 111265-111273

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/ACCESS.2023.3322725

Keywords

Cache strategy; cloud storage; file correlation; hit rate; machine learning; prefetching

Designing a high-capacity cache is crucial for improving the accessibility of cloud storage. This study introduces a skip-gram-based file similarity strategy to optimize caching and prefetching in cloud storage. By judging the correlation between files, the strategy enables efficient prefetching and cache replacement. The prefetching strategy significantly improves the cache hit rate and consumes minimal time during online operation.
Designing a high-capacity cache is an essential means of improving the accessibility of cloud storage. Compared with traditional data access, cloud storage access presents new patterns, and traditional caching strategies do not handle the prefetching and replacement of non-hot data well. Numerous studies have shown that file correlation can optimize the caching and prefetching strategies of cloud storage. However, characterizing the correlation between files across multiple dimensions is complex, and the difficulty of optimizing cloud storage caching with file correlation increases accordingly. To address these shortcomings, this study designs a skip-gram-based file similarity strategy built from an analysis of user access. By judging the correlation between files, the strategy prefetches files, dynamically inserts them into a high-capacity cache, and selects replacement candidates. With this prefetching strategy, the cache hit rate improves significantly in the simulation benchmark. In addition, the strategy can build an index table after each training run, so very little time is consumed during online operation. Building the index during training takes $O(N \log V)$ time, and an index lookup takes $O(1)$ time.
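
As a rough illustration of the idea described above (the paper does not publish code, so everything below is an assumption made for the sketch: the use of gensim's Word2Vec, the toy access sequences, and the names prefetch_index and files_to_prefetch), file-access sequences can play the role of sentences for skip-gram training, each file receives an embedding, and a top-k neighbour table is precomputed after training so that the online prefetch decision reduces to a single dictionary lookup.

# Sketch only, not the authors' implementation. Assumes gensim >= 4.0
# (Word2Vec with sg=1 selects the skip-gram architecture).
from gensim.models import Word2Vec

# Hypothetical access log: each inner list is one user's file-access sequence.
access_sequences = [
    ["fileA", "fileB", "fileC", "fileA"],
    ["fileB", "fileC", "fileD"],
    ["fileA", "fileC", "fileD", "fileE"],
]

# Offline training: learn an embedding per file from its co-access context.
model = Word2Vec(
    sentences=access_sequences,
    vector_size=64,   # embedding dimension (illustrative value)
    window=3,         # co-access context width
    min_count=1,
    sg=1,             # 1 = skip-gram
)

# Build the index table once after training: file -> k most correlated files.
# The expensive similarity search happens here, offline, so the online path
# below is a plain dictionary lookup.
K = 2
prefetch_index = {
    f: [name for name, _score in model.wv.most_similar(f, topn=K)]
    for f in model.wv.index_to_key
}

def files_to_prefetch(requested_file):
    """Online path: constant-time lookup of files correlated with the request."""
    return prefetch_index.get(requested_file, [])

# Example: on a request for fileB, prefetch its most correlated neighbours.
print(files_to_prefetch("fileB"))

In this sketch the offline step absorbs the training cost (reported in the abstract as $O(N \log V)$), while the online lookup against the precomputed table is $O(1)$, which is the division of work the abstract describes.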
