Learning to Optimize: Training Deep Neural Networks for Interference Management

Cited by: 494
Authors
Sun, Haoran [1 ]
Chen, Xiangyi [1 ]
Shi, Qingjiang [2 ]
Hong, Mingyi [1 ]
Fu, Xiao [3 ]
Sidiropoulos, Nicholas D. [4 ]
Affiliations
[1] Univ Minnesota, Dept Elect & Comp Engn, Minneapolis, MN 55455 USA
[2] Tongji Univ, Sch Software Engn, Shanghai, Peoples R China
[3] Oregon State Univ, Sch Elect Engn & Comp Sci, Corvallis, OR 97331 USA
[4] Univ Virginia, Dept Elect & Comp Engn, Charlottesville, VA 22904 USA
Funding
US National Science Foundation;
Keywords
Optimization algorithms approximation; deep neural networks; interference management; WMMSE algorithm; COMPLEXITY; CAPACITY;
DOI
10.1109/TSP.2018.2866382
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Numerical optimization has played a central role in addressing key signal processing (SP) problems. Highly effective methods have been developed for a large variety of SP applications such as communications, radar, filter design, and speech and image analytics, just to name a few. However, optimization algorithms often entail considerable complexity, which creates a serious gap between theoretical design/analysis and real-time processing. In this paper, we aim at providing a new learning-based perspective to address this challenging issue. The key idea is to treat the input and output of an SP algorithm as an unknown nonlinear mapping and use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately by a DNN of moderate size, then SP tasks can be performed effectively, since passing the input through a DNN only requires a small number of simple operations. In our paper, we first identify a class of optimization algorithms that can be accurately approximated by a fully connected DNN. Second, to demonstrate the effectiveness of the proposed approach, we apply it to approximate a popular interference management algorithm, namely, the WMMSE algorithm. Extensive experiments using both synthetically generated wireless channel data and real DSL channel data have been conducted. It is shown that, in practice, only a small network is sufficient to obtain high approximation accuracy, and DNNs can achieve orders of magnitude speedup in computational time compared to the state-of-the-art interference management algorithm.
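The abstract describes the core recipe: run an iterative optimizer (here WMMSE) offline to label random channel realizations, then fit a fully connected network to the resulting channel-to-power mapping. Below is a minimal sketch of that pipeline, assuming a scalar Gaussian interference channel with K = 3 single-antenna transceiver pairs, unit noise power and user weights, and Rayleigh-fading gains; the names (run_wmmse, P_MAX) and the sigmoid output head are illustrative choices, not taken from the authors' released code.

```python
# Minimal sketch: learn the WMMSE channel -> power mapping with an MLP.
# Assumptions (not from the paper's code): K = 3 users, P_MAX = 1,
# unit noise power, sum-rate weights alpha_k = 1, Rayleigh fading.
import numpy as np
import torch
import torch.nn as nn

K, P_MAX, SIGMA2 = 3, 1.0, 1.0   # users, power budget, noise power

def run_wmmse(H, n_iter=100):
    """Scalar WMMSE (Shi et al., 2011) for one channel realization.
    H[k, j] = |h_kj|: gain from transmitter j to receiver k."""
    v = np.sqrt(P_MAX) * np.ones(K)                 # sqrt-power variables
    d = np.diag(H)                                  # direct-link gains
    for _ in range(n_iter):
        u = d * v / (SIGMA2 + (H ** 2) @ (v ** 2))  # MMSE receive scalars
        w = 1.0 / (1.0 - u * d * v)                 # MSE weights
        v = w * u * d / ((H ** 2).T @ (w * u ** 2)) # transmit update
        v = np.clip(v, 0.0, np.sqrt(P_MAX))         # enforce power budget
    return v ** 2                                   # allocated powers p_k

# Offline stage: label random Rayleigh channels with WMMSE outputs.
n_train = 5000
X = np.abs(np.random.randn(n_train, K, K))
Y = np.array([run_wmmse(H) for H in X])

# Small fully connected network: K^2 channel gains in, K powers out.
net = nn.Sequential(
    nn.Linear(K * K, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, K), nn.Sigmoid(),   # output in [0, 1] ~ p_k / P_MAX
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.tensor(X.reshape(n_train, -1), dtype=torch.float32)
y = torch.tensor(Y, dtype=torch.float32) / P_MAX

for epoch in range(100):               # full-batch MSE regression
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```

Once trained, inference is a single forward pass per channel realization, which is the source of the speedup the abstract reports: the per-sample cost no longer scales with the number of WMMSE iterations.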
Pages: 5438 - 5453
Page count: 16
Related Papers
50 items
  • [21] Training deep quantum neural networks
    Beer, Kerstin
    Bondarenko, Dmytro
    Farrelly, Terry
    Osborne, Tobias J.
    Salzmann, Robert
    Scheiermann, Daniel
    Wolf, Ramona
    NATURE COMMUNICATIONS, 2020, 11 (01)
  • [22] A Novel Learning Algorithm to Optimize Deep Neural Networks: Evolved Gradient Direction Optimizer (EVGO)
    Karabayir, Ibrahim
    Akbilgic, Oguz
    Tas, Nihat
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (02) : 685 - 694
  • [23] Team Deep Neural Networks for Interference Channels
    de Kerret, Paul
    Gesbert, David
    Filippone, Maurizio
    2018 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2018,
  • [24] Interference Classification Using Deep Neural Networks
    Yu, Jianyuan
    Alhassoun, Mohammad
    Buehrer, R. Michael
    2020 IEEE 92ND VEHICULAR TECHNOLOGY CONFERENCE (VTC2020-FALL), 2020,
  • [25] Identifying and training deep learning neural networks on biomedical-related datasets
    Woessner, Alan E.
    Anjum, Usman
    Salman, Hadi
    Lear, Jacob
    Turner, Jeffrey T.
    Campbell, Ross
    Beaudry, Laura
    Zhan, Justin
    Cornett, Lawrence E.
    Gauch, Susan
    Quinn, Kyle P.
    BRIEFINGS IN BIOINFORMATICS, 2024, 25
  • [26] An improved model training method for residual convolutional neural networks in deep learning
    Li, Xuelei
    Li, Rengang
    Zhao, Yaqian
    Zhao, Jian
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (05) : 6811 - 6821
  • [27] Local Critic Training for Model-Parallel Learning of Deep Neural Networks
    Lee, Hojung
    Hsieh, Cho-Jui
    Lee, Jong-Seok
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (09) : 4424 - 4436
  • [28] Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks
    Hoefler, Torsten
    Alistarh, Dan
    Ben-Nun, Tal
    Dryden, Nikoli
    Peste, Alexandra
JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [30] Accelerating the Training of Convolutional Neural Networks for Image Segmentation with Deep Active Learning
    Chen, Weitao
    Salay, Rick
    Sedwards, Sean
    Abdelzad, Vahdat
    Czarnecki, Krzysztof
    2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2020,