Article

A smart admission control and cache replacement approach in content delivery networks

Publisher

Springer
DOI: 10.1007/s10586-023-04095-7

Keywords

Smart caching policies; Reinforcement learning; Deep learning; Probability prediction; Cache hit ratio


Content Delivery Networks (CDNs) carry most data traffic today by caching content in a network of servers, serving users the requested objects and reducing delivery latency. Content caching performance depends on many factors, such as where objects should be stored, which objects to store, and when to cache them. The proposed methodology comprises two main phases: an admission control phase and a cache replacement phase. The admission control phase accepts or rejects each incoming request by training a Reinforcement Learning (RL) agent to make the decisions that maximize its reward, which in this case is the hit ratio. The cache replacement phase estimates each object's future popularity by building a predictive model on a popularity prediction mechanism, where a Long Short-Term Memory (LSTM) model computes object popularity. The LSTM model's output informs which objects to cache and which to evict. The proposed methodology is evaluated on a dataset against conventional replacement policies, namely First-In-First-Out (FIFO), Least Recently Used (LRU), and Least Frequently Used (LFU), as well as a recent machine-learning-based algorithm. The experimental results show that the proposed methodology outperforms these baselines by 34.7% to 97.17% with a cache size of 130.
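The two-phase scheme the abstract describes (an admission decision on each request, plus popularity-driven eviction) can be sketched minimally. This is not the paper's method: the RL admission agent and the LSTM popularity predictor are stood in for by a simple frequency-with-recency-decay score and a score-threshold admission rule; the class name, the score formula, and the decay constant are all illustrative assumptions.

```python
import math

class TwoPhaseCache:
    """Illustrative sketch of a two-phase cache: an admission phase
    decides whether to cache an object at all, and a replacement phase
    evicts the object with the lowest predicted popularity."""

    def __init__(self, capacity=130):   # cache size used in the paper's experiments
        self.capacity = capacity
        self.cache = set()
        self.freq = {}                  # request count per object
        self.last_seen = {}             # logical time of last request per object
        self.clock = 0
        self.hits = 0
        self.requests = 0

    def predicted_popularity(self, obj):
        # Stand-in for the paper's LSTM popularity predictor:
        # request frequency discounted by recency (exponential decay).
        age = self.clock - self.last_seen.get(obj, 0)
        return self.freq.get(obj, 0) * math.exp(-0.01 * age)

    def admit(self, obj):
        # Stand-in for the RL admission phase: admit only if the new
        # object's score beats the least popular object already cached.
        if len(self.cache) < self.capacity:
            return True
        victim = min(self.cache, key=self.predicted_popularity)
        return self.predicted_popularity(obj) > self.predicted_popularity(victim)

    def request(self, obj):
        """Serve one request; returns True on a cache hit."""
        self.clock += 1
        self.requests += 1
        self.freq[obj] = self.freq.get(obj, 0) + 1
        self.last_seen[obj] = self.clock
        if obj in self.cache:
            self.hits += 1
            return True
        if self.admit(obj):
            if len(self.cache) >= self.capacity:
                victim = min(self.cache, key=self.predicted_popularity)
                self.cache.discard(victim)
            self.cache.add(obj)
        return False

    def hit_ratio(self):
        return self.hits / self.requests if self.requests else 0.0
```

On a skewed workload this keeps frequently requested objects cached while one-off requests fail admission or are evicted first, which is the intuition behind both the admission and replacement phases; the paper's contribution is learning these decisions (RL policy, LSTM forecasts) rather than hand-coding them.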
