Article

A Stable Deep Reinforcement Learning Framework for Recommendation

Journal

IEEE INTELLIGENT SYSTEMS
Volume 37, Issue 3, Pages 76-84

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/MIS.2022.3145503

Keywords

Reinforcement learning; Data models; Intelligent systems; Training data; Entropy; Stability analysis; Optimization

Funding

  1. Provincial Natural Science Foundation of Shaanxi of China [2019JZ-26]
  2. National Natural Science Foundation of China [61876141, 61373111]

Abstract

Recommender systems (RSs) address the problem of information overload and are crucial in industrial applications. Recently, combining reinforcement learning (RL) with RSs has attracted researchers' attention. These methods model the interaction between an RS and its users as a sequential decision-making process. However, prior studies suffer from two disadvantages: 1) they fail to model the accumulated long-term interest associated with high reward, and 2) their algorithms require large amounts of interaction data to learn a good policy and are unstable in recommendation scenarios. In this article, we propose a stable reinforcement learning framework for recommendation. First, we redefine the Markov decision process of RL-based recommendation and add a stable module to model users' high-feedback behavior. Second, an advanced RL algorithm is introduced to ensure stability and exploration. Experiments verify the effectiveness of the proposed algorithm.
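The abstract frames recommendation as a Markov decision process and, together with the "Entropy" keyword, points to entropy-regularized RL as the stability/exploration mechanism. The sketch below is purely illustrative of that general idea and is not the paper's algorithm: a toy recommendation MDP (hypothetical `ToyRecEnv`, with a made-up hidden user preference) and a tabular policy-gradient learner that maximizes reward plus an entropy bonus, `E[r] + tau * H(pi(.|s))`.

```python
import math
import random

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy H(p) = -sum p log p (small epsilon avoids log 0)."""
    return -sum(p * math.log(p + 1e-12) for p in probs)

class ToyRecEnv:
    """Toy recommendation MDP (illustrative only, not the paper's setup):
    state = last recommended item, action = next item to recommend,
    reward = 1 if the action matches the user's hidden preferred
    follow-up item for the current state, else 0."""
    def __init__(self, n_items=4, seed=0):
        rng = random.Random(seed)
        self.n_items = n_items
        self.pref = [rng.randrange(n_items) for _ in range(n_items)]
        self.state = 0

    def step(self, action):
        reward = 1.0 if action == self.pref[self.state] else 0.0
        self.state = action  # the recommended item becomes the next context
        return self.state, reward

def train(env, alpha=0.5, tau=0.05, episodes=4000, horizon=8, seed=1):
    """Tabular policy gradient with an explicit entropy-gradient term,
    i.e. maximize E[r] + tau * H(pi(.|s)). A sketch of the
    entropy-regularized idea only; the paper's algorithm differs."""
    rng = random.Random(seed)
    logits = [[0.0] * env.n_items for _ in range(env.n_items)]
    for _ in range(episodes):
        env.state = rng.randrange(env.n_items)
        for _ in range(horizon):
            s = env.state
            pi = softmax(logits[s])
            a = rng.choices(range(env.n_items), weights=pi)[0]
            _, r = env.step(a)
            h = entropy(pi)
            for k in range(env.n_items):
                # REINFORCE gradient of log pi(a|s) w.r.t. logit k ...
                pg = (1.0 if k == a else 0.0) - pi[k]
                # ... plus the entropy gradient dH/dz_k = -pi_k(log pi_k + H)
                eg = -pi[k] * (math.log(pi[k] + 1e-12) + h)
                logits[s][k] += alpha * (r * pg + tau * eg)
    return logits
```

The entropy bonus keeps the policy from collapsing onto a single item too early, which is one common way to obtain more stable, exploratory training; the `tau` temperature trades off exploitation against exploration.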

