Decentralized Task Assignment for Mobile Crowdsensing With Multi-Agent Deep Reinforcement Learning

Cited by: 8
Authors
Xu, Chenghao [1]
Song, Wei [1]
Affiliations
[1] Univ New Brunswick, Fac Comp Sci, Fredericton, NB E3B 5A3, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Task analysis; Sensors; Resource management; Privacy; Metaheuristics; Costs; Routing; Graph embedding; learning-communication; mobile crowdsensing (MCS); multi-agent deep reinforcement learning (DRL); QMIX; task assignment; ALLOCATION;
DOI
10.1109/JIOT.2023.3268846
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Task assignment is a fundamental research problem in mobile crowdsensing (MCS), since it directly determines an MCS system's practicality and economic value. Due to the complex dynamics of tasks and workers, task assignment problems are usually NP-hard, and approximation-based methods are preferred over impractical optimal methods. In prior work, Xu and Song (2022) proposed a graph neural network-based deep reinforcement learning (GDRL) method to solve routing problems in MCS, which shows high performance and time efficiency. However, GDRL, as a centralized method, must cope with limited scalability and the challenge of privacy protection. In this article, we propose a multi-agent deep reinforcement learning-based method, named communication-QMIX-based multi-agent DRL (CQDRL), to solve a task assignment problem in a decentralized fashion. The CQDRL method not only inherits the merits of GDRL over handcrafted heuristic and metaheuristic methods but also exploits the computation potential of mobile devices and protects workers' privacy through a decentralized decision-making scheme. Our extensive experiments show that the CQDRL method achieves significantly better performance than traditional methods and performs fairly close to the centralized GDRL method.
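The core idea described in the abstract, decentralized per-worker decision-making trained with QMIX-style centralized value mixing, can be illustrated with a minimal sketch. The snippet below is a generic QMIX mixing-network illustration in PyTorch, not the authors' CQDRL implementation; the class names (AgentQNet, QMixer), all dimensions, and the toy setting of N_AGENTS workers each choosing among N_ACTIONS tasks are assumptions for illustration only.

```python
# Minimal QMIX-style sketch: decentralized execution, centralized value mixing.
# All names and dimensions below are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

N_AGENTS, N_ACTIONS, OBS_DIM, STATE_DIM, EMBED_DIM = 4, 6, 10, 32, 16

class AgentQNet(nn.Module):
    """Per-worker Q-network: each agent picks a task from its local observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, obs):                       # obs: (batch, OBS_DIM)
        return self.net(obs)                      # Q-values: (batch, N_ACTIONS)

class QMixer(nn.Module):
    """Monotonic mixing network: combines per-agent Q-values into a team value
    Q_tot, with mixing weights generated from the global state (training only)."""
    def __init__(self):
        super().__init__()
        self.hyper_w1 = nn.Linear(STATE_DIM, N_AGENTS * EMBED_DIM)
        self.hyper_b1 = nn.Linear(STATE_DIM, EMBED_DIM)
        self.hyper_w2 = nn.Linear(STATE_DIM, EMBED_DIM)
        self.hyper_b2 = nn.Sequential(nn.Linear(STATE_DIM, EMBED_DIM), nn.ReLU(),
                                      nn.Linear(EMBED_DIM, 1))

    def forward(self, agent_qs, state):           # agent_qs: (batch, N_AGENTS)
        b = agent_qs.size(0)
        # Absolute values keep the mixing weights non-negative (monotonicity).
        w1 = torch.abs(self.hyper_w1(state)).view(b, N_AGENTS, EMBED_DIM)
        b1 = self.hyper_b1(state).view(b, 1, EMBED_DIM)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, EMBED_DIM, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)   # Q_tot: (batch, 1)

# Decentralized execution: each worker greedily picks a task from local obs only.
agents = [AgentQNet() for _ in range(N_AGENTS)]
obs = torch.randn(1, N_AGENTS, OBS_DIM)
actions = [agents[i](obs[:, i]).argmax(dim=-1) for i in range(N_AGENTS)]

# Centralized training: the mixer sees the global state and the chosen Q-values.
state = torch.randn(1, STATE_DIM)
chosen_qs = torch.stack(
    [agents[i](obs[:, i]).gather(1, actions[i].view(-1, 1)).squeeze(1)
     for i in range(N_AGENTS)], dim=1)            # (1, N_AGENTS)
q_tot = QMixer()(chosen_qs, state)                # team value used in the TD loss
```

The non-negative mixing weights make Q_tot monotonic in each agent's Q-value, so per-worker greedy action selection at execution time stays consistent with the jointly trained team value; the communication component of CQDRL mentioned in the abstract is omitted from this sketch.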
Pages: 16564-16578
Page count: 15
Related Papers
50 records in total
  • [1] Fu, Y.; Qi, K.; Shi, Y.; Shen, Y.; Xu, L.; Zhang, X. Dynamic Task Assignment Framework for Mobile Crowdsensing with Deep Reinforcement Learning. Wireless Communications and Mobile Computing, vol. 2023, 2023.
  • [2] Chen, Yize; Wang, Hao. IntelligentCrowd: Mobile Crowdsensing via Multi-Agent Reinforcement Learning. IEEE Transactions on Emerging Topics in Computational Intelligence, 2021, 5(5): 840-845.
  • [3] Ye, Yuxiao; Wang, Hao; Liu, Chi Harold; Dai, Zipeng; Li, Guozheng; Wang, Guoren; Tang, Jian. QoI-Aware Mobile Crowdsensing for Metaverse by Multi-Agent Deep Reinforcement Learning. IEEE Journal on Selected Areas in Communications, 2024, 42(3): 783-798.
  • [4] Omidshafiei, Shayegan; Pazis, Jason; Amato, Christopher; How, Jonathan P.; Vian, John. Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability. International Conference on Machine Learning, vol. 70, 2017.
  • [5] Simon, Bernd; Ortiz, Andrea; Saad, Walid; Klein, Anja. Decentralized Online Learning in Task Assignment Games for Mobile Crowdsensing. IEEE Transactions on Communications, 2024, 72(8): 4945-4960.
  • [6] Zhao, Pengcheng; Li, Xiang; Gao, Shang; Wei, Xiaohui. Cooperative task assignment in spatial crowdsourcing via multi-agent deep reinforcement learning. Journal of Systems Architecture, 2022, 128.
  • [7] de Souza, Cristino, Jr.; Newbury, Rhys; Cosgun, Akansel; Castillo, Pedro; Vidolov, Boris; Kulic, Dana. Decentralized Multi-Agent Pursuit Using Deep Reinforcement Learning. IEEE Robotics and Automation Letters, 2021, 6(3): 4552-4559.
  • [8] Tao, Xi; Song, Wei. Task Allocation for Mobile Crowdsensing with Deep Reinforcement Learning. 2020 IEEE Wireless Communications and Networking Conference (WCNC), 2020.
  • [9] Szostak, Hadar; Cohen, Kobi. Deep Multi-Agent Reinforcement Learning for Decentralized Active Hypothesis Testing. IEEE Access, 2024, 12: 130444-130459.
  • [10] Szostak, Hadar; Cohen, Kobi. Decentralized Anomaly Detection via Deep Multi-Agent Reinforcement Learning. 2022 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2022.