Deep Q-network Based Reinforcement Learning for Distributed Dynamic Spectrum Access

Cited by: 1
Authors
Yadav, Manish Anand [1 ]
Li, Yuhui [1 ]
Fang, Guangjin [1 ]
Shen, Bin [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun CQUPT, Sch Commun & Informat Engn SCIE, Chongqing 400065, Peoples R China
Keywords
dynamic spectrum access; Q-learning; deep reinforcement learning; double deep Q-network
DOI
10.1109/CCAI55564.2022.9807797
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
To address spectrum scarcity and spectrum under-utilization in wireless networks, we propose a double deep Q-network (DDQN) based reinforcement learning algorithm for distributed dynamic spectrum access. Each channel in the network is either busy or idle, evolving according to a two-state Markov chain. At the start of each time slot, every secondary user (SU) performs spectrum sensing on each channel and accesses one based on the sensing result and the output of the algorithm's Q-network. Over time, the deep reinforcement learning (DRL) agent learns the spectrum environment and models the behavior patterns of the primary users (PUs). Simulations show that the proposed algorithm is simple to train, yet effective in reducing interference to both primary and secondary users and in achieving a higher successful transmission rate.
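The setting the abstract describes can be sketched in a few lines: channels flip between idle and busy according to a two-state Markov chain, and an agent learns which channel to access each slot. The paper uses a double deep Q-network; the minimal stand-in below uses tabular double Q-learning (two value tables, one updated at random each slot) to illustrate the same double-estimator idea without a neural network. All class names, transition probabilities, and hyperparameters here are illustrative assumptions, not values from the paper.

```python
import random

class MarkovChannel:
    """Hypothetical two-state Markov channel: idle (PU absent) or busy (PU active).
    p01 = P(idle -> busy), p10 = P(busy -> idle); values are illustrative."""
    def __init__(self, p01=0.2, p10=0.3, seed=None):
        self.p01, self.p10 = p01, p10
        self.rng = random.Random(seed)
        self.busy = self.rng.random() < 0.5
    def step(self):
        # Flip state with the transition probability of the current state.
        flip = self.p10 if self.busy else self.p01
        if self.rng.random() < flip:
            self.busy = not self.busy
        return self.busy

def train(n_channels=3, slots=5000, alpha=0.1, eps=0.1, seed=0):
    """Tabular double-Q stand-in for the paper's double DQN (stateless sketch):
    two value tables Qa and Qb; actions are chosen greedily on Qa + Qb, and
    each slot only one randomly chosen table is updated, which is the
    double-estimator trick that reduces overestimation bias."""
    rng = random.Random(seed)
    channels = [MarkovChannel(seed=seed + i) for i in range(n_channels)]
    Qa = [0.0] * n_channels
    Qb = [0.0] * n_channels
    for _ in range(slots):
        busy = [ch.step() for ch in channels]
        # Epsilon-greedy channel selection over the combined estimates.
        if rng.random() < eps:
            a = rng.randrange(n_channels)
        else:
            a = max(range(n_channels), key=lambda i: Qa[i] + Qb[i])
        reward = 0.0 if busy[a] else 1.0  # transmission succeeds only if PU absent
        # Double update: randomly pick which table to move toward the reward.
        if rng.random() < 0.5:
            Qa[a] += alpha * (reward - Qa[a])
        else:
            Qb[a] += alpha * (reward - Qb[a])
    return Qa, Qb
```

With identical channel statistics each Q-value converges near the stationary idle probability p10 / (p01 + p10); in the full algorithm the tables are replaced by online and target networks whose inputs include the sensing results.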
Pages: 227-232
Number of pages: 6