Reinforcement learning-based cell selection in sparse mobile crowdsensing

Cited: 46
Authors
Liu, Wenbin [1 ,4 ]
Wang, Leye [2 ,3 ]
Wang, En [1 ]
Yang, Yongjian [1 ]
Zeghlache, Djamal [4 ]
Zhang, Daqing [2 ,3 ,4 ]
Affiliations
[1] Jilin Univ, Coll Comp Sci & Technol, Changchun, Jilin, Peoples R China
[2] Peking Univ, Key Lab High Confidence Software Technol, Beijing, Peoples R China
[3] Peking Univ, Sch Elect Engn & Comp Sci, Beijing, Peoples R China
[4] Telecom SudParis, RS2M, Evry, France
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Mobile crowdsensing; Cell selection; Reinforcement learning; Compressive sensing; GAME; GO;
DOI
10.1016/j.comnet.2019.06.010
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Sparse Mobile Crowdsensing (MCS) is a novel MCS paradigm in which mobile devices collect sensing data from only a small subset of cells (sub-areas) in the target sensing area, while the data of the remaining cells are intelligently inferred with a quality guarantee. Since sensing different cell sets will likely yield different levels of inferred data quality, cell selection (i.e., choosing the cells in the target area from which participants collect sensed data) is a critical issue that determines the total amount of data that needs to be collected (i.e., the data collection cost) to ensure a certain level of data quality. To address this issue, this paper proposes reinforcement learning-based cell selection algorithms for Sparse MCS. First, we model the key concepts in reinforcement learning, including state, action, and reward, and propose a Q-learning based cell selection algorithm. To deal with the large state space, we employ a deep Q-network to learn the Q-function, which helps decide which cell is the better choice in a given state during cell selection. We then extend the Q-network to a deep recurrent Q-network with LSTM to capture temporal patterns and handle partial observability. Furthermore, we leverage transfer learning techniques to reduce the dependency on large amounts of training data. Experiments on various real-life sensing datasets verify the effectiveness of our proposed algorithms over state-of-the-art mechanisms in Sparse MCS, reducing the number of sensed cells by up to 20% under the same data inference quality guarantee. (C) 2019 Published by Elsevier B.V.
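The abstract sketches a Q-learning formulation of cell selection (state, action, reward). Below is a minimal, self-contained Python sketch of that formulation only; the cell count, the coverage-based quality proxy, and the 0.8 quality threshold are illustrative assumptions, not the paper's actual compressive-sensing inference model or reward.

```python
import random
from collections import defaultdict

N_CELLS = 16                 # cells (sub-areas) in the target sensing area (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
QUALITY_TARGET = 0.8         # illustrative quality-guarantee threshold

Q = defaultdict(float)       # Q[(state, action)]; state = bitmask of sensed cells

def inference_quality(sensed_mask):
    # Placeholder: the paper scores quality via compressive-sensing data
    # inference over the unsensed cells; a coverage ratio stands in here.
    return bin(sensed_mask).count("1") / N_CELLS

def unsensed(state):
    return [c for c in range(N_CELLS) if not state & (1 << c)]

def step(state, action):
    # Sense one more cell; reward is the resulting gain in inference quality.
    next_state = state | (1 << action)
    reward = inference_quality(next_state) - inference_quality(state)
    done = inference_quality(next_state) >= QUALITY_TARGET
    return next_state, reward, done

def choose_action(state):
    cells = unsensed(state)
    if random.random() < EPSILON:
        return random.choice(cells)                    # explore
    return max(cells, key=lambda a: Q[(state, a)])     # exploit

for episode in range(2000):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        # One-step Q-learning update.
        best_next = max((Q[(next_state, a)] for a in unsensed(next_state)),
                        default=0.0)
        target = reward + (0.0 if done else GAMMA * best_next)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state
```

In the paper itself, the Q-function over this combinatorially large state space is approximated by a deep Q-network (and a recurrent LSTM variant for partial observability) rather than a table; the tabular form above only makes the state/action/reward modeling concrete.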
Pages: 102-114
Page count: 13