A dynamic spectrum access algorithm based on deep reinforcement learning with novel multi-vehicle reward functions in cognitive vehicular networks

Cited: 0
Authors
Chen, Lingling [1 ,2 ]
Wang, Ziwei [1 ]
Zhao, Xiaohui [3 ]
Shen, Xuan [1 ]
He, Wei [1 ]
Affiliations
[1] Jilin Inst Chem Technol, Coll Informat & Control Engn, Jilin 132000, Peoples R China
[2] Jilin Univ, Coll Commun Engn, Changchun 130012, Peoples R China
[3] Jilin Univ, Coll Commun Engn, Key Lab Informat Sci, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Spectrum access; Cognitive vehicular networks; Deep reinforcement learning (DRL); Quality of service (QoS); ALLOCATION; POWER;
DOI
10.1007/s11235-024-01188-5
Chinese Library Classification
TN [Electronic technology, communication technology];
Discipline Classification Code
0809;
Abstract
As the transportation sector undergoes a revolution, the communication demands of vehicles are increasing, and improving the success rate of vehicle spectrum access has become a major problem to be solved. Previous research on dynamic spectrum access in cognitive vehicular networks (CVNs) considered only the case of a single vehicle accessing a channel, so spectrum resources could not be fully utilized. To fully utilize spectrum resources, a model for spectrum sharing among multiple secondary vehicles (SVs) and a primary vehicle (PV) is proposed, covering scenarios where multiple SVs share spectrum to maximize the average quality of service (QoS) of vehicles, under the condition that the total interference generated by vehicles accessing the same channel remains below an interference threshold. This paper proposes a deep Q-network algorithm with a modified reward function (IDQN) to maximize the average QoS of PVs and SVs and improve spectrum utilization. The algorithm designs different reward functions according to the QoS of PVs and SVs in different situations. Finally, the proposed algorithm is compared with the deep Q-network (DQN) and Q-learning algorithms on a Python simulation platform. The average access success rate of SVs under the proposed IDQN algorithm reaches 98%, an improvement of 18% over the Q-learning algorithm, and its convergence is 62.5% faster than that of the DQN algorithm. At the same time, the average QoS of PVs and the average QoS of SVs under the IDQN algorithm reach 2.4, improvements of 50% and 33% over the DQN algorithm, and of 60% and 140% over the Q-learning algorithm.
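The abstract's key mechanism is a reward function that switches form depending on whether the interference constraint on a shared channel holds. Below is a minimal sketch of that idea in Python (the language of the paper's own simulation platform). The SINR-based QoS proxy, the penalty value, and all numeric parameters are illustrative assumptions for exposition, not the authors' actual formulation.

import numpy as np

# Hypothetical sketch of a situation-dependent multi-vehicle reward in the
# spirit of the IDQN algorithm described in the abstract. All names and
# constants are assumptions, not taken from the paper.

def sinr(p_tx, gain, interference, noise=1e-9):
    """Signal-to-interference-plus-noise ratio for one vehicle link."""
    return (p_tx * gain) / (interference + noise)

def qos(p_tx, gain, interference):
    """Shannon-style rate (bits/s/Hz) used here as a stand-in QoS metric."""
    return np.log2(1.0 + sinr(p_tx, gain, interference))

def reward(sv_qos, pv_qos, total_interference, i_threshold):
    """Reward shaped by whether the interference constraint holds:
    - constraint violated: a penalty, discouraging that channel choice;
    - constraint met: the average QoS of the PV and the SV, pushing the
      agent to maximize both, as the abstract's objective states."""
    if total_interference > i_threshold:
        return -1.0                      # access attempt harms the PV
    return 0.5 * (sv_qos + pv_qos)       # joint QoS objective

# Example: one SV considers joining a channel already carrying 2e-10 W of
# aggregate interference, against an assumed threshold of 5e-10 W.
sv = qos(p_tx=0.1, gain=1e-8, interference=2e-10)
pv = qos(p_tx=0.2, gain=1e-8, interference=2e-10)
print(reward(sv, pv, total_interference=2e-10, i_threshold=5e-10))

Splitting the reward this way gives a DQN agent two distinct learning signals: respect the PV's interference threshold first, then prefer channels where the joint QoS of primary and secondary vehicles is highest.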
Pages: 359-383
Page count: 25
Related Papers
50 records in total
  • [1] Multi-user reinforcement learning based multi-reward for spectrum access in cognitive vehicular networks
    Chen, Lingling
    Zhao, Quanjun
    Fu, Ke
    Zhao, Xiaohui
    Sun, Hongliang
    [J]. TELECOMMUNICATION SYSTEMS, 2023, 83 (01) : 51 - 65
  • [2] A multi-channel and multi-user dynamic spectrum access algorithm based on deep reinforcement learning in Cognitive Vehicular Networks with sensing error
    Chen, Lingling
    Fu, Ke
    Zhao, Quanjun
    Zhao, Xiaohui
    [J]. PHYSICAL COMMUNICATION, 2022, 55
  • [3] Reinforcement Learning Based Auction Algorithm for Dynamic Spectrum Access in Cognitive Radio Networks
    Teng, Yinglei
    Zhang, Yong
    Niu, Fang
    Dai, Chao
    Song, Mei
    [J]. 2010 IEEE 72ND VEHICULAR TECHNOLOGY CONFERENCE FALL, 2010,
  • [4] A Novel Dynamic Spectrum Access Framework Based on Reinforcement Learning for Cognitive Radio Sensor Networks
    Lin, Yun
    Wang, Chao
    Wang, Jiaxing
    Dou, Zheng
    [J]. SENSORS, 2016, 16 (10)
  • [5] Dynamic spectrum access based on deep reinforcement learning for multiple access in cognitive radio
    Li, Zeng-qi
    Liu, Xin
    Ning, Zhao-long
    [J]. PHYSICAL COMMUNICATION, 2022, 54
  • [6] Deep Reinforcement Learning for Dynamic Spectrum Access in Wireless Networks
    Xu, Y.
    Yu, J.
    Headley, W. C.
    Buehrer, R. M.
    [J]. 2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 207 - 212
  • [7] Dynamic Spectrum Access in Cognitive Radio Networks Using Deep Reinforcement Learning and Evolutionary Game
    Yang, Peitong
    Li, Lixin
    Yin, Haying
    Zhang, Huisheng
    Liang, Wei
    Chen, Wei
    Han, Zhu
    [J]. 2018 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2018, : 405 - 409
  • [8] Deep Multi-User Reinforcement Learning for Dynamic Spectrum Access in Multichannel Wireless Networks
    Naparstek, Oshri
    Cohen, Kobi
    [J]. GLOBECOM 2017 - 2017 IEEE GLOBAL COMMUNICATIONS CONFERENCE, 2017,
  • [9] A Novel Dynamic Spectrum Access Algorithm for Cognitive Radio Networks
    Zhao, Ming
    Yin, Chang-chuan
    Wang, Xiao-jun
    [J]. JOURNAL OF COMMUNICATIONS AND NETWORKS, 2013, 15 (01) : 38 - 44