Deep Learning-Based Energy Efficient Resource Allocation for Underlay Cognitive MISO Interference Channels

Cited by: 4
Authors
Lee, Woongsup [1 ,2 ]
Lee, Kisong [3 ]
Affiliations
[1] Gyeongsang Natl Univ, Inst Marine Sci, Dept Informat & Commun Engn, Tongyoung 53064, South Korea
[2] Yonsei Univ, Grad Sch Informat, Seoul 03722, South Korea
[3] Dongguk Univ, Dept Informat & Commun Engn, Seoul 04620, South Korea
Funding
National Research Foundation of Singapore
Keywords
Deep learning; underlay cognitive radio network; multiple-input-single-output; resource allocation; energy efficiency; beamforming; POWER ALLOCATION; RADIO NETWORKS; TRANSMIT POWER; USER SELECTION; MIMO UNDERLAY; OPTIMIZATION; COMPLEXITY; DOWNLINK; QOS; FRAMEWORK;
DOI
10.1109/TCCN.2022.3222847
Chinese Library Classification (CLC)
TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0809;
Abstract
In this paper, we investigate a deep learning (DL)-based resource allocation strategy for an underlay cognitive radio network with multiple-input-single-output interference channels. The beamforming vector and transmit power of secondary users (SUs) are optimized to maximize the sum energy efficiency (EE) of the SUs whilst maintaining quality-of-service for all transmissions, i.e., keeping the interference caused at the primary user below a given threshold whilst guaranteeing a minimum spectral efficiency for each SU. To this end, a novel DL framework is proposed in which the resource allocation strategy is approximated by a well-designed deep neural network (DNN) model consisting of three DNN units. Moreover, an efficient training methodology is devised, in which the DNN model is initialized using a suboptimal solution produced by a low-complexity algorithm and then fine-tuned through unsupervised learning-based main training. Through extensive simulations, we confirm that our training methodology with initialization enables the collection of large amounts of labeled training data within a short preparation time, thereby improving the training performance of the proposed DNN model with a reduced training overhead. Moreover, our results show that the proposed DL-based resource allocation can achieve near-optimal EE, i.e., 95.8% of that of the optimal scheme, with a low computation time of less than 20 milliseconds, which underlines the benefit of the proposed scheme.
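To make the optimization target described above concrete, the following is a minimal, hypothetical PyTorch sketch of the unsupervised objective: a single DNN unit (standing in for the paper's three-unit model) maps channel-state features to unit-norm beamforming directions and transmit powers, and a penalized loss combines the negative sum EE with the primary-user interference and minimum-spectral-efficiency constraints. All network sizes, constants (K, NT, P_MAX, I_TH, R_MIN, P_C, NOISE), and the channel-tensor layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, not the authors' code: all sizes and constants below are assumptions.
import torch
import torch.nn as nn

K = 4        # number of secondary-user (SU) links (assumed)
NT = 4       # transmit antennas per SU transmitter (assumed)
P_MAX = 1.0  # per-SU transmit power budget, normalized (assumed)
I_TH = 0.1   # interference threshold at the primary user (assumed)
R_MIN = 1.0  # minimum spectral efficiency per SU, bits/s/Hz (assumed)
P_C = 0.1    # circuit power per SU (assumed)
NOISE = 1e-2 # receiver noise power (assumed)

class AllocNet(nn.Module):
    """Hypothetical DNN unit: maps CSI features to beam directions and transmit powers."""
    def __init__(self):
        super().__init__()
        in_dim = 2 * NT * K * (K + 1)  # real/imag parts of SU-SU and SU-PU channels
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                  nn.Linear(256, 256), nn.ReLU())
        self.beam_head = nn.Linear(256, 2 * NT * K)  # unnormalized complex beam vectors
        self.power_head = nn.Linear(256, K)          # per-SU transmit powers

    def forward(self, h_feat):
        z = self.body(h_feat)
        w = torch.view_as_complex(self.beam_head(z).view(-1, K, NT, 2).contiguous())
        w = w / w.norm(dim=-1, keepdim=True)           # unit-norm beamforming directions
        p = P_MAX * torch.sigmoid(self.power_head(z))  # powers constrained to (0, P_MAX)
        return w, p

def ee_penalized_loss(w, p, h_su, h_pu, lam=10.0):
    """Negative sum EE plus penalties for violated QoS constraints (unsupervised objective).

    h_su: (B, K, K, NT) complex; h_su[b, i, j] is the channel from SU-Tx j to SU-Rx i.
    h_pu: (B, K, NT) complex; channel from SU-Tx j to the primary-user receiver.
    """
    gains = (torch.einsum('bijn,bjn->bij', h_su.conj(), w).abs() ** 2) * p.unsqueeze(1)
    sig = torch.diagonal(gains, dim1=1, dim2=2)      # desired-signal power per SU link
    interf = gains.sum(dim=2) - sig                  # inter-SU interference per link
    rate = torch.log2(1.0 + sig / (interf + NOISE))  # spectral efficiency per SU
    ee = (rate / (p + P_C)).sum(dim=1)               # sum energy efficiency
    pu_interf = ((torch.einsum('bjn,bjn->bj', h_pu.conj(), w).abs() ** 2) * p).sum(dim=1)
    penalty = lam * (torch.relu(pu_interf - I_TH)            # PU interference constraint
                     + torch.relu(R_MIN - rate).sum(dim=1))  # minimum-SE constraint
    return (-ee + penalty).mean()
```

In the paper's training pipeline, such a model would first be initialized with supervised learning on labels produced by the low-complexity suboptimal algorithm, and only afterwards fine-tuned by minimizing a loss of this kind (e.g., with torch.optim.Adam) over batches of channel realizations; that warm-start stage is not reproduced in this sketch.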
Pages: 695 - 707
Number of pages: 13
Related Papers (50 in total)
  • [41] Energy-efficient resource allocation for bidirectional wireless power and information transfer over interference channels
    Lee, Kisong
    Choi, Hyun-Ho
    JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2024, 227
  • [42] Deep Reinforcement Learning Based Resource Allocation for D2D Communications Underlay Cellular Networks
    Yu, Seoyoung
    Lee, Jeong Woo
    SENSORS, 2022, 22 (23)
  • [43] DEEP LEARNING-BASED CROSS-LAYER RESOURCE ALLOCATION FOR WIRED COMMUNICATION SYSTEMS
    Behmandpoor, Pourya
    Verdyck, Jeroen
    Moonen, Marc
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 4120 - 4124
  • [44] Deep reinforcement learning-based resource allocation in multi-access edge computing
    Khani, Mohsen
    Sadr, Mohammad Mohsen
    Jamali, Shahram
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2023,
  • [45] Deep Learning-Based Resource Allocation for 5G Broadband TV Service
    Yu, Peng
    Zhou, Fanqin
    Zhang, Xiang
    Qiu, Xuesong
    Kadoch, Michel
    Cheriet, Mohamed
    IEEE TRANSACTIONS ON BROADCASTING, 2020, 66 (04) : 800 - 813
  • [46] A Deep Reinforcement Learning-Based Framework for Dynamic Resource Allocation in Multibeam Satellite Systems
    Hu, Xin
    Liu, Shuaijun
    Chen, Rong
    Wang, Weidong
    Wang, Chunting
    IEEE COMMUNICATIONS LETTERS, 2018, 22 (08) : 1612 - 1615
  • [47] Deep Q-learning based optimal resource allocation method for energy harvested cognitive radio networks
    Giri, Manish Kumar
    Majumder, Saikat
    PHYSICAL COMMUNICATION, 2022, 53
  • [48] Deep Reinforcement Learning-Based Service-Oriented Resource Allocation in Smart Grids
    Xi, Linhan
    Wang, Ying
    Wang, Yang
    Wang, Zhihui
    Wang, Xue
    Chen, Yuanbin
    IEEE ACCESS, 2021, 9 (09) : 77637 - 77648
  • [49] Poster Abstract: Deep Reinforcement Learning-based Resource Allocation in Vehicular Fog Computing
    Lee, Seung-seob
    Lee, Sukyoung
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM 2019 WKSHPS), 2019, : 1029 - 1030
  • [50] Energy-Efficient Resource Allocation in Uplink NOMA Systems with Deep Reinforcement Learning
    Zhang, Yuhan
    Wang, Xiaoming
    Xu, Youyun
    2019 11TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2019,