Learning to Optimize: Training Deep Neural Networks for Interference Management

Cited by: 494
Authors
Sun, Haoran [1 ]
Chen, Xiangyi [1 ]
Shi, Qingjiang [2 ]
Hong, Mingyi [1 ]
Fu, Xiao [3 ]
Sidiropoulos, Nicholas D. [4 ]
Affiliations
[1] Univ Minnesota, Dept Elect & Comp Engn, Minneapolis, MN 55455 USA
[2] Tongji Univ, Sch Software Engn, Shanghai, Peoples R China
[3] Oregon State Univ, Sch Elect Engn & Comp Sci, Corvallis, OR 97331 USA
[4] Univ Virginia, Dept Elect & Comp Engn, Charlottesville, VA 22904 USA
Funding
U.S. National Science Foundation;
Keywords
Optimization algorithms approximation; deep neural networks; interference management; WMMSE algorithm; COMPLEXITY; CAPACITY;
DOI
10.1109/TSP.2018.2866382
CLC Classification
TM [Electrical Technology]; TN [Electronics & Communication Technology];
Discipline Code
0808; 0809;
Abstract
Numerical optimization has played a central role in addressing key signal processing (SP) problems. Highly effective methods have been developed for a large variety of SP applications such as communications, radar, filter design, and speech and image analytics, just to name a few. However, optimization algorithms often entail considerable complexity, which creates a serious gap between theoretical design/analysis and real-time processing. In this paper, we aim to provide a new learning-based perspective to address this challenging issue. The key idea is to treat the input and output of an SP algorithm as an unknown nonlinear mapping and use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately by a DNN of moderate size, then SP tasks can be performed effectively, since passing the input through a DNN only requires a small number of simple operations. In this paper, we first identify a class of optimization algorithms that can be accurately approximated by a fully connected DNN. Second, to demonstrate the effectiveness of the proposed approach, we apply it to approximate a popular interference management algorithm, namely, the WMMSE algorithm. Extensive experiments using both synthetically generated wireless channel data and real DSL channel data have been conducted. It is shown that, in practice, only a small network is sufficient to obtain high approximation accuracy, and DNNs can achieve orders of magnitude speedup in computational time compared to the state-of-the-art interference management algorithm.
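The abstract's core idea — treat an optimization algorithm as an unknown input-to-output mapping and fit a small fully connected network to it — can be illustrated with a minimal sketch. Everything below is hypothetical: `toy_algorithm` is a simple stand-in mapping, not the actual WMMSE iteration, and the two-layer NumPy network is not the paper's architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_algorithm(H):
    # Placeholder for an optimization algorithm's input->output mapping:
    # given "channel gains" H, return a normalized "power allocation".
    P = H**2 / (1.0 + H**2)
    return P / P.sum(axis=1, keepdims=True)

# Training data: sample inputs, run the (expensive) algorithm offline.
n_in, n_hidden, n_out = 4, 32, 4
X = rng.uniform(0.1, 2.0, size=(2000, n_in))
Y = toy_algorithm(X)

# One-hidden-layer fully connected network, trained with plain gradient
# descent on mean-squared error between the net and the algorithm's output.
W1 = rng.normal(0.0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(X):
    Z1 = X @ W1 + b1
    A1 = np.maximum(Z1, 0.0)            # ReLU hidden layer
    return Z1, A1, A1 @ W2 + b2         # linear output layer

lr = 0.05
for step in range(3000):
    Z1, A1, out = forward(X)
    err = out - Y                       # gradient of 0.5 * MSE w.r.t. out
    gW2 = A1.T @ err / len(X); gb2 = err.mean(axis=0)
    dZ1 = (err @ W2.T) * (Z1 > 0)       # backprop through ReLU
    gW1 = X.T @ dZ1 / len(X); gb1 = dZ1.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(X)[2] - Y) ** 2).mean())
print(f"training MSE: {mse:.5f}")
```

At deployment time, a single forward pass replaces the iterative algorithm, which is where the speedup the abstract reports comes from; accuracy then hinges on how well the network generalizes to unseen inputs.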
Pages: 5438-5453
Page count: 16
Related Papers
50 in total
  • [31] Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
    Hoefler, Torsten
    Alistarh, Dan
    Ben-Nun, Tal
    Dryden, Nikoli
    Peste, Alexandra
    JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 23
  • [32] Training Deep Neural Networks for Image Applications with Noisy Labels by Complementary Learning
    Zhou Y.
    Liu Y.
    Wang R.
    2017, Science Press, 54: 2649-2659
  • [33] Local to Global Learning: Gradually Adding Classes for Training Deep Neural Networks
    Cheng, Hao
    Lian, Dongze
    Deng, Bowen
    Gao, Shenghua
    Tan, Tao
    Geng, Yanlin
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 4743-4751
  • [34] Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks
    Wu, Yanzhao
    Liu, Ling
    Bae, Juhyun
    Chow, Ka-Ho
    Iyengar, Arun
    Pu, Calton
    Wei, Wenqi
    Yu, Lei
    Zhang, Qi
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019: 1971-1980
  • [35] Dynamic Memory Management for GPU-based training of Deep Neural Networks
    Shriram, S. B.
    Garg, Anshuj
    Kulkarni, Purushottam
    2019 IEEE 33RD INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS 2019), 2019: 200-209
  • [36] Power Control for Interference Management via Ensembling Deep Neural Networks (Invited Paper)
    Liang, Fei
    Shen, Cong
    Yu, Wei
    Wu, Feng
    2019 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2019
  • [37] Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks
    Iiduka, Hideaki
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52(12): 13250-13261
  • [38] Online Deep Learning: Learning Deep Neural Networks on the Fly
    Sahoo, Doyen
    Pham, Quang
    Lu, Jing
    Hoi, Steven C. H.
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018: 2660-2666
  • [39] Learning with Deep Photonic Neural Networks
    Leelar, Bhawani Shankar
    Shivaleela, E. S.
    Srinivas, T.
    2017 IEEE WORKSHOP ON RECENT ADVANCES IN PHOTONICS (WRAP), 2017
  • [40] Deep Learning with Random Neural Networks
    Gelenbe, Erol
    Yin, Yongha
    PROCEEDINGS OF SAI INTELLIGENT SYSTEMS CONFERENCE (INTELLISYS) 2016, VOL 2, 2018, 16: 450-462