Article

A Stable Deep Reinforcement Learning Framework for Recommendation

Journal

IEEE Intelligent Systems
Volume 37, Issue 3, Pages 76-84

Publisher

IEEE Computer Society
DOI: 10.1109/MIS.2022.3145503

Keywords

Reinforcement learning; Data models; Intelligent systems; Training data; Entropy; Stability analysis; Optimization

Funding

  1. Provincial Natural Science Foundation of Shaanxi of China [2019JZ-26]
  2. National Natural Science Foundation of China [61876141, 61373111]


This article proposes a stable reinforcement learning framework for recommendation systems, addressing previous research limitations by redefining the Markov decision process of RL and introducing a stable module. The experiments confirm the effectiveness of the proposed algorithm.
A recommender system (RS) addresses the problem of information overload, which is crucial in industrial settings. Recently, combining reinforcement learning (RL) with RS has attracted researchers' attention. These methods model the interaction between the RS and its users as a sequential decision-making process. However, prior studies suffer from two disadvantages: 1) they fail to model the accumulated long-term interest associated with high reward, and 2) the algorithms require large amounts of interaction data to learn a good policy and are unstable in recommendation scenarios. In this article, we propose a stable reinforcement learning framework for recommendation. First, we redefine the Markov decision process of RL-based recommendation and add a stable module to model users' high-feedback behavior. Second, an advanced RL algorithm is introduced to ensure stability and exploration. Experiments verify the effectiveness of the proposed algorithm.
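The abstract's framing can be illustrated with a toy sketch of the recommendation-as-MDP idea: the state summarizes the user's recent interactions, the action is the recommended item, the reward is user feedback, and an entropy bonus (suggested by the paper's "Entropy" keyword) keeps the policy exploratory. Everything below is hypothetical: the embeddings, the simulated user, and the REINFORCE-style update are illustrative stand-ins, not the authors' algorithm or stable module.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, EMB = 6, 4

# Hypothetical item embeddings and a toy latent user-preference vector.
item_emb = rng.normal(size=(N_ITEMS, EMB))
user_pref = rng.normal(size=EMB)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def state_from_history(history):
    # State: mean embedding of the user's recent interactions.
    if not history:
        return np.zeros(EMB)
    return item_emb[history].mean(axis=0)

theta = np.zeros((EMB, N_ITEMS))   # linear policy: state -> item logits
alpha, lr = 0.1, 0.05              # entropy weight, learning rate

history, returns = [], []
for step in range(200):
    s = state_from_history(history[-3:])        # last 3 interactions
    probs = softmax(s @ theta)
    a = rng.choice(N_ITEMS, p=probs)
    # Simulated feedback: reward 1 if the item aligns with the preference.
    r = float(item_emb[a] @ user_pref > 0)
    # Entropy-augmented reward, soft-RL style: the bonus keeps the policy
    # stochastic, one common way to stabilize exploration.
    entropy = -np.sum(probs * np.log(probs + 1e-8))
    grad_logp = np.outer(s, np.eye(N_ITEMS)[a] - probs)
    theta += lr * (r + alpha * entropy) * grad_logp
    history.append(a)
    returns.append(r)
```

This is a single-user, bandit-like simplification; the paper's framework additionally redefines the MDP and adds a stable module, which this sketch does not attempt to reproduce.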
