PIANO: Influence Maximization Meets Deep Reinforcement Learning

Cited by: 16
|
Authors
Li, Hui [1]
Xu, Mengting [1]
Bhowmick, Sourav S. [2]
Joty, Shafiq Rayhan [2]
Sun, Changsheng [3]
Cui, Jiangtao [1]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[2] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[3] Natl Univ Singapore, Sch Comp, Singapore 119077, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Training; Approximation algorithms; Social networking (online); Peer-to-peer computing; Task analysis; Heuristic algorithms; Computational modeling; Deep reinforcement learning (RL); graph embedding; influence maximization (IM); social network; ALGORITHMS;
DOI
10.1109/TCSS.2022.3164667
CLC classification number
TP3 [Computing Technology, Computer Technology];
Subject classification code
0812;
Abstract
Since its introduction in 2003, the influence maximization (IM) problem has drawn significant research attention in the literature. The aim of IM, which is NP-hard, is to select a set of k users, known as seed users, who can influence the most individuals in the social network. The state-of-the-art algorithms estimate the expected influence of nodes based on sampled diffusion paths. As the number of required samples has recently been proven to be lower bounded by a particular threshold that presets the tradeoff between accuracy and efficiency, the result quality of these traditional solutions is hard to improve further without sacrificing efficiency. In this article, we present an orthogonal and novel paradigm that addresses the IM problem by leveraging deep reinforcement learning (RL) to estimate the expected influence. In particular, we present a novel framework called deeP reInforcement leArning-based iNfluence maximizatiOn (PIANO) that incorporates network embedding and RL techniques to address this problem. To make it practical, we further present PIANO-E and PIANO@⟨d⟩, both of which can be applied directly to answer IM without training the model from scratch. An experimental study on real-world networks demonstrates that PIANO achieves the best performance with respect to efficiency and influence-spread quality compared with state-of-the-art classical solutions. We also demonstrate that the learned parametric models generalize well across different networks. In addition, we provide a pool of pretrained PIANO models such that any IM task can be addressed by directly applying a model from the pool without training over the targeted network.
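For context, the sketch below illustrates the classical sampling-based baseline that the abstract contrasts PIANO against: greedy seed selection under the Independent Cascade model, with expected influence estimated from Monte Carlo-sampled diffusion paths. This is not code from the paper; the toy graph, propagation probability p, sample count, and all function names are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): greedy influence
# maximization under the Independent Cascade (IC) model, estimating the
# expected spread of a seed set by Monte Carlo sampling of diffusion paths.
import random
from typing import Dict, List, Set

def simulate_ic(graph: Dict[int, List[int]], seeds: Set[int], p: float) -> int:
    """Run one IC diffusion from `seeds`; each edge activates with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph: Dict[int, List[int]], seeds: Set[int],
                    p: float, samples: int = 1000) -> float:
    """Monte Carlo estimate of the expected influence of `seeds`."""
    return sum(simulate_ic(graph, seeds, p) for _ in range(samples)) / samples

def greedy_im(graph: Dict[int, List[int]], k: int,
              p: float = 0.1, samples: int = 1000) -> Set[int]:
    """Greedily add the node with the largest marginal spread gain, k times."""
    seeds: Set[int] = set()
    for _ in range(k):
        base = expected_spread(graph, seeds, p, samples)
        best_node, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = expected_spread(graph, seeds | {v}, p, samples) - base
            if gain > best_gain:
                best_node, best_gain = v, gain
        seeds.add(best_node)
    return seeds

if __name__ == "__main__":
    toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
    print(greedy_im(toy, k=2, p=0.3, samples=500))
```

The accuracy of this baseline hinges on the number of sampled diffusions, which the abstract notes is lower bounded by a threshold tying accuracy to efficiency; PIANO instead estimates the expected influence with a learned, embedding-based RL model.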
Pages: 1288-1300
Page count: 13
Related papers
50 records in total
  • [1] DGN: influence maximization based on deep reinforcement learning
    Wang, Jingwen
    Cao, Zhoulin
    Xie, Chunzhi
    Li, Yanli
    Liu, Jia
    Gao, Zhisheng
    [J]. Journal of Supercomputing, 2025, 81 (01):
  • [2] Influence maximization in hypergraphs based on evolutionary deep reinforcement learning
    Xu, Long
    Ma, Lijia
    Lin, Qiuzhen
    Li, Lingjie
    Gong, Maoguo
    Li, Jianqiang
    [J]. Information Sciences, 2025, 698
  • [3] Balanced influence maximization in social networks based on deep reinforcement learning
    Yang S.
    Du Q.
    Zhu G.
    Cao J.
    Chen L.
    Qin W.
    Wang Y.
    [J]. Neural Networks, 2024, 169 : 334 - 351
  • [4] Influence Maximization in Complex Networks by Using Evolutionary Deep Reinforcement Learning
    Ma, Lijia
    Shao, Zengyang
    Li, Xiaocong
    Lin, Qiuzhen
    Li, Jianqiang
    Leung, Victor C. M.
    Nandi, Asoke K.
    [J]. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2023, 7 (04): : 995 - 1009
  • [5] Influence Maximization for Signed Networks Based on Evolutionary Deep Reinforcement Learning
    Ma L.-J.
    Hong H.-P.
    Lin Q.-Z.
    Li J.-Q.
    Gong M.-G.
    [J]. Ruan Jian Xue Bao/Journal of Software, 2023, 34 (11): : 5084 - 5112
  • [6] Multimedia Meets Deep Reinforcement Learning
    Chen, Shu-Ching
    [J]. IEEE MULTIMEDIA, 2022, 29 (03) : 5 - 6
  • [7] Feeling of Presence Maximization: mmWave-Enabled Virtual Reality Meets Deep Reinforcement Learning
    Yang, Peng
    Quek, Tony Q. S.
    Chen, Jingxuan
    You, Chaoqun
    Cao, Xianbin
    [J]. IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (11) : 10005 - 10019
  • [8] NEDRL-CIM: Network Embedding Meets Deep Reinforcement Learning to Tackle Competitive Influence Maximization on Evolving Social Networks
    Ali, Khurshed
    Wang, Chih-Yu
    Yeh, Mi-Yen
    Li, Cheng-Te
    Chen, Yi-Shin
    [J]. 2021 IEEE 8TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2021,
  • [9] Addressing Competitive Influence Maximization on Unknown Social Network with Deep Reinforcement Learning
    Ali, Khurshed
    Wang, Chih-Yu
    Yeh, Mi-Yen
    Chen, Yi-Shin
    [J]. 2020 IEEE/ACM INTERNATIONAL CONFERENCE ON ADVANCES IN SOCIAL NETWORKS ANALYSIS AND MINING (ASONAM), 2020, : 196 - 203
  • [10] ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep Reinforcement Learning
    Chen, Tiantian
    Yan, Siwen
    Guo, Jianxiong
    Wu, Weili
    [J]. IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11 (02) : 2210 - 2221