Deep Q-network Based Reinforcement Learning for Distributed Dynamic Spectrum Access

Cited: 1
Authors
Yadav, Manish Anand [1 ]
Li, Yuhui [1 ]
Fang, Guangjin [1 ]
Shen, Bin [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun CQUPT, Sch Commun & Informat Engn SCIE, Chongqing 400065, Peoples R China
Keywords
dynamic spectrum access; Q-learning; deep reinforcement learning; double deep Q-network;
DOI
10.1109/CCAI55564.2022.9807797
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To address spectrum scarcity and spectrum under-utilization in wireless networks, we propose a double deep Q-network based reinforcement learning algorithm for distributed dynamic spectrum access. Each channel in the network is either busy or idle, evolving according to a two-state Markov chain. At the start of each time slot, every secondary user (SU) performs spectrum sensing on each channel and accesses one based on the sensing result and the output of the algorithm's Q-network. Over time, the deep reinforcement learning (DRL) algorithm learns the spectrum environment and models the behavior patterns of the primary users (PUs). Simulations show that the proposed algorithm is simple to train, yet effective in reducing interference to both primary and secondary users and in achieving a higher successful transmission rate.
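The abstract's core ingredients can be sketched in code: channels that flip between busy and idle via a two-state Markov chain, and the double-Q update rule in which the online estimator selects the next action while the target estimator evaluates it. This is a minimal illustrative sketch only; it uses a tabular stand-in for the paper's deep Q-network, and the transition probabilities, reward values (+1 for accessing an idle channel, -1 for a collision), and hyperparameters are assumptions, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov channel model (0 = busy, 1 = idle), as in the abstract.
# P[s, s'] gives the transition probability; these values are illustrative.
P = np.array([[0.7, 0.3],    # busy -> busy / idle
              [0.2, 0.8]])   # idle -> busy / idle

def step_channels(states):
    """Advance each channel one time slot according to its Markov chain."""
    return np.array([rng.choice(2, p=P[s]) for s in states])

# Double Q-learning update (tabular stand-in for the double deep Q-network).
# Key DDQN idea: SELECT the next action with the online estimator QA, but
# EVALUATE it with the target estimator QB, reducing over-estimation bias.
n_channels, gamma, alpha = 4, 0.9, 0.1
QA = np.zeros((2 ** n_channels, n_channels))  # online estimates
QB = np.zeros((2 ** n_channels, n_channels))  # target estimates

def encode(states):
    """Map the vector of sensed channel states to a single table index."""
    return int("".join(map(str, states)), 2)

def double_q_update(s, a, r, s_next):
    a_star = int(np.argmax(QA[s_next]))      # action chosen by online table
    target = r + gamma * QB[s_next, a_star]  # value judged by target table
    QA[s, a] += alpha * (target - QA[s, a])

# Training loop: one SU senses all channels, accesses one, and is rewarded
# if that channel turns out idle (collision with a PU gives a penalty).
states = rng.integers(0, 2, n_channels)
for t in range(5000):
    s = encode(states)
    # epsilon-greedy access decision based on the sensed state
    a = int(np.argmax(QA[s])) if rng.random() > 0.1 else int(rng.integers(n_channels))
    next_states = step_channels(states)
    r = 1.0 if next_states[a] == 1 else -1.0
    double_q_update(s, a, r, encode(next_states))
    if t % 100 == 0:
        QB = QA.copy()  # periodic sync of the target estimator
    states = next_states
```

In the full multi-user setting described by the paper, each SU would run its own copy of this learner with a deep network replacing the tables, so that the state (sensing results across channels) need not be enumerated explicitly.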
Pages: 227 - 232 (6 pages)
Related Papers (50 records)
  • [1] Dynamic spectrum access based on double deep Q-network and convolution neural network
    Fang, Guangjin
    Shen, Bin
    Zhang, Hong
    Cui, Taiping
    [J]. 2022 24TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): ARTIFICIAL INTELLIGENCE TECHNOLOGIES TOWARD CYBERSECURITY, 2022, : 112 - +
  • [2] Deep Q-Network Based Power Allocation Meets Reservoir Computing in Distributed Dynamic Spectrum Access Networks
    Song, Hao
    Liu, Lingjia
    Chang, Hao-Hsuan
    Ashdown, Jonathan
    Yi, Yang
    [J]. IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM 2019 WKSHPS), 2019, : 774 - 779
  • [3] Deep Q-Network Based Dynamic Spectrum Access for Cognitive Networks with Limited Spectrum Sensing Capability SUs
    Miao, Benjing
    Pan, Zhiwen
    Wang, Bin
    Zhang, Yu
    Liu, Nan
    [J]. 2022 11TH INTERNATIONAL CONFERENCE ON COMMUNICATIONS, CIRCUITS AND SYSTEMS (ICCCAS 2022), 2022, : 176 - 181
  • [4] Multi-User Dynamic Spectrum Access Based on LR-Q Deep Reinforcement Learning Network
    Li, Yuhui
    Wang, Yu
    Li, Yue
    Shen, Bin
    [J]. 2023 25TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY, ICACT, 2023, : 79 - 84
  • [5] Distributed Deep Reinforcement Learning with Wideband Sensing for Dynamic Spectrum Access
    Kaytaz, Umuralp
    Ucar, Seyhan
    Akgun, Bans
    Coleri, Sinem
    [J]. 2020 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2020,
  • [6] A Deep Reinforcement Learning Approach to Fair Distributed Dynamic Spectrum Access
    Jalil, Syed Qaisar
    Rehmani, Mubashir Husain
    Chalup, Stephan
    [J]. PROCEEDINGS OF THE 17TH EAI INTERNATIONAL CONFERENCE ON MOBILE AND UBIQUITOUS SYSTEMS: COMPUTING, NETWORKING AND SERVICES (MOBIQUITOUS 2020), 2021, : 236 - 244
  • [7] Deep Reinforcement Learning. Case Study: Deep Q-Network
    Vrejoiu, Mihnea Horia
    [J]. ROMANIAN JOURNAL OF INFORMATION TECHNOLOGY AND AUTOMATIC CONTROL-REVISTA ROMANA DE INFORMATICA SI AUTOMATICA, 2019, 29 (03): : 65 - 78
  • [8] Deep Reinforcement Learning Pairs Trading with a Double Deep Q-Network
    Brim, Andrew
    [J]. 2020 10TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE (CCWC), 2020, : 222 - 227
  • [9] A Deep Q-Network Reinforcement Learning-Based Model for Autonomous Driving
    Ahmed, Marwa
    Lim, Chee Peng
    Nahavandi, Saeid
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 739 - 744
  • [10] Tuning Apex DQN: A Reinforcement Learning based Deep Q-Network Algorithm
    Ruhela, Dhani
    Ruhela, Amit
    [J]. PRACTICE AND EXPERIENCE IN ADVANCED RESEARCH COMPUTING 2024, PEARC 2024, 2024,