Adaptive Contention Window Design Using Deep Q-Learning

Cited by: 22
Authors
Kumar, Abhishek [1]
Verma, Gunjan [2]
Rao, Chirag [2]
Swami, Ananthram [2]
Segarra, Santiago [1]
Affiliations
[1] Rice Univ, Houston, TX 77251 USA
[2] US Army CCDC Army Res Lab, Adelphi, MD USA
Keywords
Wireless network; random access; contention window; reinforcement learning; deep Q-learning; ACCESS-CONTROL
DOI
10.1109/ICASSP39728.2021.9414805
CLC classification
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
We study the problem of adaptive contention window (CW) design for random-access wireless networks. More precisely, our goal is to design an intelligent node that can dynamically adapt its minimum CW (MCW) parameter to maximize a network-level utility knowing neither the MCWs of other nodes nor how these change over time. To achieve this goal, we adopt a reinforcement learning (RL) framework where we circumvent the lack of system knowledge with local channel observations and we reward actions that lead to high utilities. To efficiently learn these preferred actions, we follow a deep Q-learning approach, where the Q-value function is parametrized using a multi-layer perceptron. In particular, we implement a rainbow agent, which incorporates several empirical improvements over the basic deep Q-network. Numerical experiments based on the NS3 simulator reveal that the proposed RL agent performs close to optimal and markedly improves upon existing learning and non-learning based alternatives.
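The abstract's core mechanism (a Q-value function over candidate minimum contention windows, parametrized by a multi-layer perceptron and fed local channel observations) can be sketched as follows. This is an illustrative sketch only: the observation features, the MCW action set, the network sizes, and all function names are assumptions, not the paper's exact configuration (which additionally uses a Rainbow agent rather than this plain one-step Q-learning form).

```python
import math
import random

# Candidate minimum contention windows (powers of two, as in 802.11 DCF).
# Assumed action set for illustration.
MCW_ACTIONS = [16, 32, 64, 128, 256, 512, 1024]

class MLPQNetwork:
    """Tiny one-hidden-layer MLP mapping a local channel observation
    (e.g., observed collision and idle fractions) to one Q-value per MCW action."""

    def __init__(self, n_inputs, n_hidden, n_actions, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
                   for _ in range(n_actions)]
        self.b2 = [0.0] * n_actions

    def forward(self, x):
        # Hidden layer with tanh nonlinearity, then a linear output head.
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        return [sum(w * hi for w, hi in zip(row, h)) + b
                for row, b in zip(self.w2, self.b2)]

def select_mcw(qnet, obs, epsilon, rng):
    """Epsilon-greedy choice of an MCW action from local observations."""
    if rng.random() < epsilon:
        return rng.randrange(len(MCW_ACTIONS))
    q = qnet.forward(obs)
    return max(range(len(q)), key=q.__getitem__)

def td_target(qnet, reward, next_obs, gamma=0.99):
    """One-step Q-learning target: r + gamma * max_a' Q(s', a').
    A full agent would regress Q(s, a) toward this value by gradient descent,
    with the reward derived from a network-level utility."""
    return reward + gamma * max(qnet.forward(next_obs))
```

In use, a node would periodically build an observation vector from its local channel measurements, pick an MCW index via `select_mcw`, apply `MCW_ACTIONS[index]` as its minimum contention window, and later train the network against `td_target` using the observed utility-based reward.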
Pages: 4950-4954
Number of pages: 5