Towards Optimal Power Control via Ensembling Deep Neural Networks

Citations: 170
Authors
Liang, Fei [1 ,2 ]
Shen, Cong [3 ]
Yu, Wei [4 ]
Wu, Feng [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China
[2] Huawei Technol Co, Shanghai 201206, Peoples R China
[3] Univ Virginia, Charles L Brown Dept Elect & Comp Engn, Charlottesville, VA 22904 USA
[4] Univ Toronto, Elect & Comp Engn Dept, Toronto, ON M5S 3G4, Canada
Funding
Natural Sciences and Engineering Research Council of Canada; National Natural Science Foundation of China;
Keywords
Power control; interference mitigation; deep neural networks (DNN); ensemble learning; SUM RATE MAXIMIZATION; ALLOCATION; COMPLEXITY;
DOI
10.1109/TCOMM.2019.2957482
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
A deep neural network (DNN) based power control method is proposed that aims at solving the non-convex optimization problem of maximizing the sum rate of a fading multi-user interference channel. Towards this end, we first present PCNet, a multi-layer fully connected neural network that is specifically designed for the power control problem. A key challenge in training a DNN for power control is the lack of ground truth, i.e., the optimal power allocation is unknown. To address this issue, PCNet adopts an unsupervised learning strategy and directly maximizes the sum rate in the training phase. We then present PCNet+, which enhances the generalization capacity of PCNet by incorporating the noise power as an additional input to the network. Observing that a single PCNet(+) does not universally outperform the existing solutions, we further propose ePCNet(+), a network ensemble of multiple independently trained PCNets(+). Simulation results show that, for the standard symmetric $K$-user Gaussian interference channel, the proposed methods outperform state-of-the-art power control solutions under a variety of system configurations. Furthermore, the performance improvement of ePCNet comes with reduced computational complexity.
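The abstract describes training PCNet without labels by using the negative sum rate itself as the loss. The sketch below illustrates that idea under stated assumptions: the network architecture (layer widths), the power budget p_max, the noise power, and the Rayleigh-fading training samples are illustrative choices, not the paper's exact settings.

```python
# Minimal PyTorch sketch of PCNet-style unsupervised training: the network maps
# channel gains to transmit powers and is trained to maximize the sum rate
# directly, with no labeled optimal power allocations. All hyperparameters
# below are illustrative assumptions, not the values used in the paper.
import torch
import torch.nn as nn

K = 10                      # number of users (illustrative)
p_max = 1.0                 # per-user power budget (assumption)
noise_power = 1.0           # receiver noise power (assumption)

# Fully connected network: input is the flattened K x K matrix of squared
# channel gains, output is one power level per user, squashed into [0, p_max].
pcnet = nn.Sequential(
    nn.Linear(K * K, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, K), nn.Sigmoid(),
)

def neg_sum_rate(gain, power):
    """Negative sum rate of a K-user Gaussian interference channel.

    gain:  (batch, K, K) squared channel gains; gain[:, k, j] is the gain
           from transmitter j to receiver k.
    power: (batch, K) transmit powers.
    """
    signal = torch.diagonal(gain, dim1=1, dim2=2) * power            # (batch, K)
    interference = torch.einsum('bkj,bj->bk', gain, power) - signal  # (batch, K)
    sinr = signal / (interference + noise_power)
    return -torch.log2(1.0 + sinr).sum(dim=1).mean()

optimizer = torch.optim.Adam(pcnet.parameters(), lr=1e-3)
for step in range(1000):
    # Rayleigh-fading samples stand in for the training distribution.
    gain = torch.randn(128, K, K) ** 2
    power = p_max * pcnet(gain.reshape(128, -1))
    loss = neg_sum_rate(gain, power)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

An ensemble in the spirit of ePCNet would train several such networks independently and, for each channel realization, keep the power vector achieving the highest sum rate among the ensemble members.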
Pages: 1760-1776
Page count: 17