Reinforcement learning-based cell selection in sparse mobile crowdsensing

Cited by: 46
Authors
Liu, Wenbin [1 ,4 ]
Wang, Leye [2 ,3 ]
Wang, En [1 ]
Yang, Yongjian [1 ]
Zeghlache, Djamal [4 ]
Zhang, Daqing [2 ,3 ,4 ]
Affiliations
[1] Jilin Univ, Coll Comp Sci & Technol, Changchun, Jilin, Peoples R China
[2] Peking Univ, Key Lab High Confidence Software Technol, Beijing, Peoples R China
[3] Peking Univ, Sch Elect Engn & Comp Sci, Beijing, Peoples R China
[4] Telecom SudParis, RS2M, Evry, France
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Mobile crowdsensing; Cell selection; Reinforcement learning; Compressive sensing; GAME; GO;
DOI
10.1016/j.comnet.2019.06.010
CLC classification number
TP3 [Computing technology; computer technology];
Subject classification code
0812 ;
Abstract
Sparse Mobile Crowdsensing (MCS) is a novel MCS paradigm that uses mobile devices to collect sensing data from only a small subset of cells (sub-areas) in the target sensing area while intelligently inferring the data of the remaining cells with a quality guarantee. Since sensing different cell sets will likely yield different levels of inference quality, cell selection (i.e., choosing which cells in the target area to collect sensed data from participants) is a critical issue that determines the total amount of data that needs to be collected (i.e., the data collection cost) to ensure a given level of data quality. To address this issue, this paper proposes a reinforcement learning-based cell selection algorithm for Sparse MCS. First, we model the key concepts in reinforcement learning, including state, action, and reward, and propose a Q-learning-based cell selection algorithm. To deal with the large state space, we employ a deep Q-network to learn the Q-function, which helps decide which cell is the better choice in a given state during cell selection. We then extend the Q-network to a deep recurrent Q-network with LSTM to capture temporal patterns and handle partial observability. Furthermore, we leverage transfer learning techniques to reduce the dependence on large amounts of training data. Experiments on various real-life sensing datasets verify the effectiveness of the proposed algorithms over state-of-the-art mechanisms in Sparse MCS, reducing the number of sensed cells by up to 20% under the same data inference quality guarantee. (C) 2019 Published by Elsevier B.V.
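The abstract's Q-learning formulation (state = the set of cells already sensed, action = the next cell to sense, reward tied to the gain in inference quality) can be sketched as a toy tabular version. Everything concrete below is invented for illustration: the per-cell `VALUE` weights stand in for the quality gain that the paper actually measures via compressive-sensing inference, and the deep/recurrent Q-network is replaced by a plain Q-table.

```python
import random

random.seed(0)

# Hypothetical toy instance: 6 cells with made-up "informativeness" weights.
# An episode ends once the sensed cells reach the quality target, so a good
# policy hits the target while sensing as few cells as possible.
N_CELLS = 6
VALUE = [5, 30, 10, 25, 5, 25]   # invented quality gain per cell
TARGET = 80                      # invented quality threshold

def quality(selected):
    return sum(VALUE[c] for c in selected)

def choose(Q, state, actions, eps):
    # Epsilon-greedy action selection over unsensed cells.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda c: Q.get((state, c), 0.0))

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    Q = {}  # Q[(frozenset of sensed cells, next cell)] -> value
    for _ in range(episodes):
        selected = set()
        while quality(selected) < TARGET:
            state = frozenset(selected)
            actions = [c for c in range(N_CELLS) if c not in selected]
            a = choose(Q, state, actions, eps)
            r = VALUE[a]  # immediate reward: quality gain from sensing cell a
            selected.add(a)
            nxt = frozenset(selected)
            rest = [c for c in range(N_CELLS) if c not in selected]
            best_next = max((Q.get((nxt, c), 0.0) for c in rest), default=0.0)
            if quality(selected) >= TARGET:
                best_next = 0.0  # terminal state: no future reward
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (r + gamma * best_next - old)
    return Q

# Greedy rollout with the learned Q-table: sense cells until the target holds.
Q = q_learning()
selected = set()
while quality(selected) < TARGET:
    actions = [c for c in range(N_CELLS) if c not in selected]
    selected.add(choose(Q, frozenset(selected), actions, eps=0.0))
print(sorted(selected), quality(selected))
```

The discount factor makes early high-gain cells more valuable than late ones, so the greedy rollout tends to reach the quality target with few sensed cells, mirroring the paper's cost-reduction goal; the paper's actual contribution replaces this Q-table with a (recurrent) deep Q-network to cope with the real, much larger state space.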
Pages: 102-114
Page count: 13
Related papers
50 records in total
  • [11] Reinforcement Learning-Based Adaptive Operator Selection
    Durgut, Rafet
    Aydin, Mehmet Emin
    OPTIMIZATION AND LEARNING, OLA 2021, 2021, 1443 : 29 - 41
  • [12] DEEP LEARNING-BASED DETECTION OF FAKE TASK INJECTION IN MOBILE CROWDSENSING
    Sood, Ankkita
    Simsek, Murat
    Zhang, Yueqian
    Kantarci, Burak
    2019 7TH IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (IEEE GLOBALSIP), 2019,
  • [13] A machine learning-based framework for user recruitment in continuous mobile crowdsensing
    Nasser, Ruba
    Aboulhosn, Zeina
    Mizouni, Rabeb
    Singh, Shakti
    Otrok, Hadi
    AD HOC NETWORKS, 2023, 145
  • [14] Robust Data Inference and Cost-Effective Cell Selection for Sparse Mobile Crowdsensing
    Li, Chengxin
    Li, Zhetao
    Long, Saiqin
    Qiao, Pengpeng
    Yuan, Ye
    Wang, Guoren
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, : 1 - 16
  • [15] A Reinforcement Learning-Based Incentive Mechanism for Task Allocation Under Spatiotemporal Crowdsensing
    Jiang, Kaige
    Wang, Yingjie
    Wang, Haipeng
    Liu, Zhaowei
    Han, Qilong
    Zhou, Ao
    Xiang, Chaocan
    Cai, Zhipeng
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11 (02) : 2179 - 2189
  • [16] Task Allocation for Mobile Crowdsensing with Deep Reinforcement Learning
    Tao, Xi
    Song, Wei
    2020 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2020,
  • [17] A Secure Mobile Crowdsensing Game With Deep Reinforcement Learning
    Xiao, Liang
    Li, Yanda
    Han, Guoan
    Dai, Huaiyu
    Poor, H. Vincent
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2018, 13 (01) : 35 - 47
  • [18] Sparse reward for reinforcement learning-based continuous integration testing
    Yang, Yang
    Li, Zheng
    Shang, Ying
    Li, Qianyu
    JOURNAL OF SOFTWARE-EVOLUTION AND PROCESS, 2023, 35 (06)
  • [19] Deep Reinforcement Learning-Based Defense Strategy Selection
    Charpentier, Axel
    Boulahia-Cuppens, Nora
    Cuppens, Frederic
    Yaich, Reda
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY AND SECURITY, ARES 2022, 2022,
  • [20] Federated Deep Reinforcement Learning for Task Participation in Mobile Crowdsensing
    Dongare, Sumedh
    Ortiz, Andrea
    Klein, Anja
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 4436 - 4441