Towards Optimal Power Control via Ensembling Deep Neural Networks

Cited by: 170
Authors
Liang, Fei [1 ,2 ]
Shen, Cong [3 ]
Yu, Wei [4 ]
Wu, Feng [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China
[2] Huawei Technol Co, Shanghai 201206, Peoples R China
[3] Univ Virginia, Charles L Brown Dept Elect & Comp Engn, Charlottesville, VA 22904 USA
[4] Univ Toronto, Elect & Comp Engn Dept, Toronto, ON M5S 3G4, Canada
Funding
Natural Sciences and Engineering Research Council of Canada; National Natural Science Foundation of China;
Keywords
Power control; interference mitigation; deep neural networks (DNN); ensemble learning; SUM RATE MAXIMIZATION; ALLOCATION; COMPLEXITY;
DOI
10.1109/TCOMM.2019.2957482
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communications Technology];
Discipline Codes
0808; 0809;
Abstract
A deep neural network (DNN) based power control method that aims at solving the non-convex optimization problem of maximizing the sum rate of a fading multi-user interference channel is proposed. Towards this end, we first present PCNet, a multi-layer fully connected neural network designed specifically for the power control problem. A key challenge in training a DNN for power control is the lack of ground truth, i.e., the optimal power allocation is unknown. To address this issue, PCNet leverages an unsupervised learning strategy and directly maximizes the sum rate in the training phase. We then present PCNet+, which enhances the generalization capacity of PCNet by incorporating the noise power as an input to the network. Observing that a single PCNet(+) does not universally outperform the existing solutions, we further propose ePCNet(+), a network ensemble of multiple PCNets(+) trained independently. Simulation results show that, for the standard symmetric K-user Gaussian interference channel, the proposed methods outperform state-of-the-art power control solutions under a variety of system configurations. Furthermore, the performance improvement of ePCNet comes with reduced computational complexity.
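To make the unsupervised training idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of a PCNet-style network for a K-user interference channel: it maps the flattened channel-gain matrix to per-user transmit powers and is trained by minimizing the negative sum rate on randomly generated channel realizations, so no labeled optimal power allocations are needed. The network depth and width, the Gaussian channel sampling, and all hyper-parameters below are illustrative assumptions.

```python
# Minimal PCNet-style sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

K = 10          # number of transmitter-receiver pairs (assumed)
P_MAX = 1.0     # per-user power budget (assumed)
SIGMA2 = 1.0    # receiver noise power (assumed)

# Fully connected network: flattened K x K channel-gain matrix -> K powers.
pcnet = nn.Sequential(
    nn.Linear(K * K, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, K), nn.Sigmoid(),   # sigmoid keeps outputs in (0, 1)
)

def neg_sum_rate(h2, p):
    """Negative sum rate; h2 holds |h_kj|^2 with shape (batch, K, K),
    p has shape (batch, K). Row k of h2 contains the gains into receiver k."""
    signal = torch.diagonal(h2, dim1=1, dim2=2) * p              # |h_kk|^2 p_k
    interference = torch.einsum('bkj,bj->bk', h2, p) - signal    # sum over j != k
    sinr = signal / (SIGMA2 + interference)
    return -torch.log2(1.0 + sinr).sum(dim=1).mean()

opt = torch.optim.Adam(pcnet.parameters(), lr=1e-3)
for step in range(10000):
    # Unsupervised training: sample fresh random channel realizations
    # (real Gaussian coefficients here for simplicity) on the fly.
    h = torch.randn(128, K, K)
    h2 = h ** 2
    p = P_MAX * pcnet(h2.reshape(128, -1))   # powers in [0, P_MAX]
    loss = neg_sum_rate(h2, p)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under the same assumptions, an ePCNet-style ensemble would train several such networks from independent random initializations and, for each channel realization at inference time, evaluate all of them and transmit with the candidate power vector that achieves the highest sum rate.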
Pages: 1760-1776
Number of pages: 17
Related Papers
50 records in total
  • [41] Wireless Power Control via Counterfactual Optimization of Graph Neural Networks
    Naderializadeh, Navid
    Eisen, Mark
    Ribeiro, Alejandro
    PROCEEDINGS OF THE 21ST IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (IEEE SPAWC2020), 2020,
  • [42] Towards popularity prediction of information cascades via degree distribution and deep neural networks
    Feng, Xiaodong
    Zhao, Qihang
    Zhu, RuiJie
    JOURNAL OF INFORMETRICS, 2023, 17 (03)
  • [43] Attacking Neural Networks with Neural Networks: Towards Deep Synchronization for Backdoor Attacks
    Guan, Zihan
    Sun, Lichao
    Du, Mengnan
    Liu, Ninghao
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 608 - 618
  • [44] Optimal control of batch reactor via structure approaching hybrid neural networks
    Institute of Automation, Beijing University of Chemical Technology, Beijing 100029, China
    Huagong Xuebao/Journal of Chemical Industry and Engineering (China), 2008, 59 (07): 1848 - 1853
  • [45] Quasi-optimal control of a solar thermal system via neural networks
    Friese, Jana
    Brandt, Niklas
    Schulte, Andreas
    Kirches, Christian
    Tegethoff, Wilhelm
    Koehler, Juergen
    ENERGY AND AI, 2023, 12
  • [46] Towards optimal synchronization in power law networks
    Fan, Jin
    Wang, Xiao Fan
    Li, Xiang
    2006 CHINESE CONTROL CONFERENCE, VOLS 1-5, 2006, : 1525 - +
  • [47] Control System Response Improvement via Denoising Using Deep Neural Networks
    Fathi, Kiavash
    Mahdavi, Mehdi
    2019 IEEE 10TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), 2019, : 377 - 382
  • [48] Universal Approximation Power of Deep Residual Neural Networks Through the Lens of Control
    Tabuada, Paulo
    Gharesifard, Bahman
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (05) : 2715 - 2728
  • [49] A novel ensembling method to boost performance of neural networks
    Chakraborty, Manomita
    Biswas, Saroj Kumar
    Purkayastha, Biswajit
    JOURNAL OF EXPERIMENTAL & THEORETICAL ARTIFICIAL INTELLIGENCE, 2020, 32 (01) : 17 - 29
  • [50] PDE MODELS FOR DEEP NEURAL NETWORKS: LEARNING THEORY, CALCULUS OF VARIATIONS AND OPTIMAL CONTROL
    Markowich, Peter
    Portaro, Simone
    arXiv