Article

PIANO: Influence Maximization Meets Deep Reinforcement Learning

Journal

IEEE Transactions on Computational Social Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TCSS.2022.3164667

Keywords

Training; Approximation algorithms; Social networking (online); Peer-to-peer computing; Task analysis; Heuristic algorithms; Computational modeling; Deep reinforcement learning (RL); graph embedding; influence maximization (IM); social network

Abstract

This article presents a novel approach called PIANO, which leverages deep reinforcement learning to address the influence maximization problem. By incorporating network embedding and RL techniques, PIANO achieves superior performance compared to traditional solutions, as demonstrated through experimental studies on real-world networks.
Since its introduction in 2003, the influence maximization (IM) problem has drawn significant research attention in the literature. The aim of IM, which is NP-hard, is to select a set of k users, known as seed users, who can influence the most individuals in the social network. The state-of-the-art algorithms estimate the expected influence of nodes based on sampled diffusion paths. Because the number of required samples has recently been proven to be lower bounded by a particular threshold, which sets the tradeoff between accuracy and efficiency, the result quality of these traditional solutions is difficult to improve further without sacrificing efficiency. In this article, we present an orthogonal and novel paradigm to address the IM problem by leveraging deep reinforcement learning (RL) to estimate the expected influence. In particular, we present a novel framework called deeP reInforcement leArning-based iNfluence maximizatiOn (PIANO) that incorporates network embedding and RL techniques to address this problem. To make it practical, we further present PIANO-E and PIANO@⟨d⟩, both of which can be applied directly to answer IM queries without training the model from scratch. Experimental studies on real-world networks demonstrate that PIANO achieves the best performance in terms of efficiency and influence-spread quality compared with state-of-the-art classical solutions. We also demonstrate that the learned parametric models generalize well across different networks. In addition, we provide a pool of pretrained PIANO models so that any IM task can be addressed by directly applying a model from the pool, without training on the target network.
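The abstract does not include implementation details. As a rough illustration of the sampling-based paradigm it refers to (estimating a node's expected influence from sampled diffusion paths and selecting seeds greedily), the following is a minimal Python sketch under the independent cascade model. It is not the PIANO algorithm; the propagation probability, the sample count, and all function names (simulate_ic, expected_spread, greedy_im) are assumptions made for illustration only.

# Illustrative sketch only: a Monte Carlo greedy baseline for influence
# maximization under the independent cascade (IC) model. This is NOT the
# PIANO method described in the paper; it shows the sampling-based
# approach the abstract contrasts against.
import random

def simulate_ic(graph, seeds, prob=0.1):
    """Run one independent-cascade diffusion; return the number of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < prob:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, prob=0.1, samples=1000):
    """Estimate the expected influence by averaging sampled diffusion paths."""
    return sum(simulate_ic(graph, seeds, prob) for _ in range(samples)) / samples

def greedy_im(graph, k, prob=0.1, samples=1000):
    """Greedily pick k seeds with the largest estimated marginal influence gain."""
    seeds = set()
    for _ in range(k):
        base = expected_spread(graph, seeds, prob, samples) if seeds else 0.0
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = expected_spread(graph, seeds | {v}, prob, samples) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

# Example usage on a toy adjacency-list graph.
toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0], 4: [0, 1]}
print(greedy_im(toy, k=2, prob=0.2, samples=200))

The sketch makes the accuracy-efficiency tradeoff mentioned in the abstract concrete: increasing the number of samples tightens the spread estimate but multiplies the cost of every marginal-gain evaluation, which is the bottleneck that a learned influence estimator, as proposed in PIANO, aims to avoid.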

