Multi-User mmWave Beam Tracking via Multi-Agent Deep Q-Learning

Cited by: 1
Authors
MENG Fan [1 ]
HUANG Yongming [2 ]
LU Zhaohua [3 ]
XIAO Huahua [3 ]
Affiliations
[1] Purple Mountain Laboratories
[2] School of Information Science and Engineering, Southeast University
[3] State Key Laboratory of Mobile Network and Mobile Multimedia Technology, ZTE Corporation
Keywords: (not listed)
DOI: not available
CLC classification: TN929.5 [Mobile Communications]; TP18 [Artificial Intelligence Theory]
Discipline codes: 080402; 080904; 0810; 081001; 081104; 0812; 0835; 1405
Abstract
Beamforming is significant for millimeter-wave multi-user massive multiple-input multiple-output (MIMO) systems. Meanwhile, the overhead of channel state information acquisition and beam training is considerable, especially in dynamic environments. To reduce this overhead, we propose a multi-user beam tracking algorithm based on distributed deep Q-learning. By learning users' moving trajectories online, the proposed algorithm scans a beam subspace so as to maximize the average effective sum rate. For practical implementation, we model continuous beam tracking as a non-Markov decision process and accordingly develop a simplified training scheme for deep Q-learning that reduces training complexity. Furthermore, we propose a scalable state-action-reward design for scenarios with different numbers of users and antennas. Simulation results verify the effectiveness of the designed method.
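The abstract describes a distributed scheme in which each user is served by its own Q-learning agent that picks beams from a codebook and is rewarded by the achieved rate. As a rough, self-contained illustration of that per-user loop, the sketch below uses a small tabular Q-learner over quantized angular states as a stand-in for the paper's deep Q-network; the codebook size, reward surrogate, and user-motion model are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative stand-in for distributed deep Q-learning beam tracking.
# Each user runs an independent agent; states are quantized angle
# estimates, actions are beam indices. The paper uses a deep network
# over richer observations; a Q-table is used here only to show the
# per-user learning loop. All parameters are assumptions.

N_BEAMS, N_STATES = 16, 16
rng = np.random.default_rng(0)

class BeamAgent:
    def __init__(self, lr=0.1, gamma=0.9, eps=0.2):
        self.q = np.zeros((N_STATES, N_BEAMS))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, s):
        if rng.random() < self.eps:           # explore a random beam
            return int(rng.integers(N_BEAMS))
        return int(np.argmax(self.q[s]))      # exploit best known beam

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning target with bootstrapping.
        td = r + self.gamma * self.q[s_next].max() - self.q[s, a]
        self.q[s, a] += self.lr * td

def reward(state, beam):
    # Surrogate "effective rate": peaks when the chosen beam matches
    # the user's angular state (in practice this would be SNR feedback).
    return np.exp(-0.5 * (beam - state) ** 2)

agents = [BeamAgent() for _ in range(2)]      # two users, one agent each
states = [3, 11]
for _ in range(5000):
    for u, ag in enumerate(agents):
        s = states[u]
        a = ag.act(s)
        s_next = (s + int(rng.integers(-1, 2))) % N_STATES  # slow motion
        ag.update(s, a, reward(s, a), s_next)
        states[u] = s_next
```

After training, each agent's greedy beam choice should follow its user's angular state, which is the tracking behavior the paper's deep variant learns over a much richer state-action space.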
Pages: 53 - 60
Page count: 8
Related Papers (50 records in total)
  • [1] Resource Allocation for Multi-user Cognitive Radio Systems using Multi-agent Q-Learning
    Azzouna, Ahmed
    Guezmil, Amel
    Sakly, Anis
    Mtibaa, Abdellatif
    [J]. ANT 2012 AND MOBIWIS 2012, 2012, 10 : 46 - 53
  • [2] Regularized Softmax Deep Multi-Agent Q-Learning
    Pan, Ling
    Rashid, Tabish
    Peng, Bei
    Huang, Longbo
    Whiteson, Shimon
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] Multi-agent Q-Learning of Channel Selection in Multi-user Cognitive Radio Systems: A Two by Two Case
    Li, Husheng
    [J]. 2009 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2009), VOLS 1-9, 2009, : 1893 - 1898
  • [4] Multi-Agent Advisor Q-Learning
    Subramanian, Sriram Ganapathi
    Taylor, Matthew E.
    Larson, Kate
    Crowley, Mark
    [J]. JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2022, 74 : 1 - 74
  • [5] Q-learning in Multi-Agent Cooperation
    Hwang, Kao-Shing
    Chen, Yu-Jen
    Lin, Tzung-Feng
    [J]. 2008 IEEE WORKSHOP ON ADVANCED ROBOTICS AND ITS SOCIAL IMPACTS, 2008, : 239 - 244
  • [6] Multi-Agent Advisor Q-Learning
    Subramanian, Sriram Ganapathi
    Taylor, Matthew E.
    Larson, Kate
    Crowley, Mark
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 6884 - 6889
  • [7] Modular Production Control with Multi-Agent Deep Q-Learning
    Gankin, Dennis
    Mayer, Sebastian
    Zinn, Jonas
    Vogel-Heuser, Birgit
    Endisch, Christian
    [J]. 2021 26TH IEEE INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES AND FACTORY AUTOMATION (ETFA), 2021,
  • [8] Distributed Multi-Agent Deep Q-Learning for Load Balancing User Association in Dense Networks
    Lim, Byungju
    Vu, Mai
    [J]. IEEE WIRELESS COMMUNICATIONS LETTERS, 2023, 12 (07) : 1120 - 1124
  • [9] A novel multi-agent Q-learning algorithm in cooperative multi-agent system
    Ou, HT
    Zhang, WD
    Zhang, WY
    Xu, XM
    [J]. PROCEEDINGS OF THE 3RD WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-5, 2000, : 272 - 276