PIANO: Influence Maximization Meets Deep Reinforcement Learning

Cited by: 16
Authors
Li, Hui [1 ]
Xu, Mengting [1 ]
Bhowmick, Sourav S. [2 ]
Rayhan, Joty Shafiq [2 ]
Sun, Changsheng [3 ]
Cui, Jiangtao [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[2] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[3] Natl Univ Singapore, Sch Comp, Singapore 119077, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Training; Approximation algorithms; Social networking (online); Peer-to-peer computing; Task analysis; Heuristic algorithms; Computational modeling; Deep reinforcement learning (RL); graph embedding; influence maximization (IM); social network; ALGORITHMS;
DOI
10.1109/TCSS.2022.3164667
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Since its introduction in 2003, the influence maximization (IM) problem has drawn significant research attention in the literature. The aim of IM, which is NP-hard, is to select a set of k users, known as seed users, who can influence the most individuals in the social network. The state-of-the-art algorithms estimate the expected influence of nodes based on sampled diffusion paths. As the number of required samples has recently been proven to be lower bounded by a particular threshold that presets the tradeoff between accuracy and efficiency, the result quality of these traditional solutions is hard to improve further without sacrificing efficiency. In this article, we present an orthogonal and novel paradigm to address the IM problem by leveraging deep reinforcement learning (RL) to estimate the expected influence. In particular, we present a novel framework called deeP reInforcement leArning-based iNfluence maximizatiOn (PIANO) that incorporates network embedding and RL techniques to address this problem. To make it practical, we further present PIANO-E and PIANO@⟨d⟩, both of which can be applied directly to answer IM without training the model from scratch. Experimental study on real-world networks demonstrates that PIANO achieves the best performance with respect to efficiency and influence spread quality compared to state-of-the-art classical solutions. We also demonstrate that the learned parametric models generalize well across different networks. In addition, we provide a pool of pretrained PIANO models such that any IM task can be addressed by directly applying a model from the pool without training over the targeted network.
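Note (illustrative sketch, not from the article): the abstract contrasts PIANO with classical solutions that estimate expected influence from sampled diffusion paths. The Python sketch below shows what that classical baseline looks like, greedy seed selection with Monte Carlo spread estimation under the independent cascade model. The toy graph, propagation probability p, and sample count are assumptions chosen only for illustration; PIANO itself replaces this repeated simulation with a learned, embedding- and RL-based estimator.

import random
from collections import deque

def simulate_ic(graph, seeds, p=0.1):
    # One independent-cascade diffusion: every newly activated node gets a single
    # chance to activate each inactive out-neighbor with probability p.
    active = set(seeds)
    frontier = deque(seeds)
    while frontier:
        u = frontier.popleft()
        for v in graph.get(u, []):
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return len(active)

def estimate_spread(graph, seeds, p=0.1, num_samples=1000):
    # Monte Carlo estimate of the expected influence spread of a seed set;
    # the sample count controls the accuracy/efficiency tradeoff noted in the abstract.
    return sum(simulate_ic(graph, seeds, p) for _ in range(num_samples)) / num_samples

def greedy_im(graph, k, p=0.1, num_samples=1000):
    # Greedy seed selection: repeatedly add the node with the largest estimated
    # marginal gain until k seeds are chosen.
    seeds = []
    for _ in range(k):
        base = estimate_spread(graph, seeds, p, num_samples)
        best_node, best_gain = None, -1.0
        for node in graph:
            if node in seeds:
                continue
            gain = estimate_spread(graph, seeds + [node], p, num_samples) - base
            if gain > best_gain:
                best_node, best_gain = node, gain
        seeds.append(best_node)
    return seeds

if __name__ == "__main__":
    # Toy directed graph as an adjacency list (hypothetical data).
    toy_graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
    print(greedy_im(toy_graph, k=2, p=0.2, num_samples=500))

A learning-based approach of the kind the abstract describes would instead train a parametric model over node embeddings to predict marginal influence, avoiding repeated simulation on the target network.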
Pages: 1288 - 1300
Number of pages: 13
Related Papers
50 records in total
  • [21] SFC Embedding Meets Machine Learning: Deep Reinforcement Learning Approaches
    Liu, Yicen
    Lu, Yu
    Li, Xi
    Qiao, Wenxin
    Li, Zhiwei
    Zhao, Donghao
    [J]. IEEE COMMUNICATIONS LETTERS, 2021, 25 (06) : 1926 - 1930
  • [22] Cognitive Radio Network Throughput Maximization with Deep Reinforcement Learning
    Ong, Kevin Shen Hoong
    Zhang, Yang
    Niyato, Dusit
    [J]. 2019 IEEE 90TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2019-FALL), 2019,
  • [23] Leveraging transfer learning in reinforcement learning to tackle competitive influence maximization
    Ali, Khurshed
    Wang, Chih-Yu
    Chen, Yi-Shin
    [J]. KNOWLEDGE AND INFORMATION SYSTEMS, 2022, 64 (08) : 2059 - 2090
  • [25] Decentralized Deep Reinforcement Learning Meets Mobility Load Balancing
    Chang, Hao-Hsuan
    Chen, Hao
    Zhang, Jianzhong
    Liu, Lingjia
    [J]. IEEE-ACM TRANSACTIONS ON NETWORKING, 2023, 31 (02) : 473 - 484
  • [26] Cooperative Spectrum Sensing Meets Machine Learning: Deep Reinforcement Learning Approach
    Sarikhani, Rahil
    Keynia, Farshid
    [J]. IEEE COMMUNICATIONS LETTERS, 2020, 24 (07) : 1459 - 1462
  • [27] An investigation on the application of deep reinforcement learning in piano playing technique training
    Ji, Chen
    Wang, Dan
    Wang, Huan
    [J]. APPLIED MATHEMATICS AND NONLINEAR SCIENCES, 2024, 9 (01)
  • [28] Dynamic Network Reconfiguration for Entropy Maximization using Deep Reinforcement Learning
    Doorman, Christoffel
    Darvariu, Victor-Alexandru
    Hailes, Stephen
    Musolesi, Mirco
    [J]. LEARNING ON GRAPHS CONFERENCE, VOL 198, 2022, 198
  • [29] Link-Level Throughput Maximization Using Deep Reinforcement Learning
    Pourahmadi, Vahid
    [J]. Institute of Electrical and Electronics Engineers Inc., United States, (02)
  • [30] Piano harmony automatic adaptation system based on deep reinforcement learning
    Guo, Hui
    [J]. ENTERTAINMENT COMPUTING, 2025, 52