Energy Propagation in Deep Convolutional Neural Networks

Cited: 6
Authors
Wiatowski, Thomas [1 ]
Grohs, Philipp [2 ]
Boelcskei, Helmut [1 ]
Affiliations
[1] Swiss Fed Inst Technol, Dept Informat Technol & Elect Engn, CH-8092 Zurich, Switzerland
[2] Univ Vienna, Fac Math, A-1090 Vienna, Austria
Keywords
Machine learning; deep convolutional neural networks; scattering networks; energy decay and conservation; frame theory; FRAMES;
DOI
10.1109/TIT.2017.2756880
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification
0812;
Abstract
Many practical machine learning tasks employ very deep convolutional neural networks. Such large depths pose formidable computational challenges in training and operating the network. It is therefore important to understand how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers. In addition, it is desirable that the feature extractor generated by the network be informative in the sense of the only signal mapping to the all-zeros feature vector being the zero input signal. This "trivial null-set" property can be accomplished by asking for "energy conservation" in the sense of the energy in the feature vector being proportional to that of the corresponding input signal. This paper establishes conditions for energy conservation (and thus for a trivial null-set) for a wide class of deep convolutional neural network-based feature extractors and characterizes corresponding feature map energy decay rates. Specifically, we consider general scattering networks employing the modulus non-linearity and we find that under mild analyticity and high-pass conditions on the filters (which encompass, inter alia, various constructions of Weyl-Heisenberg filters, wavelets, ridgelets, α-curvelets, and shearlets) the feature map energy decays at least polynomially fast. For broad families of wavelets and Weyl-Heisenberg filters, the guaranteed decay rate is shown to be exponential. Moreover, we provide handy estimates of the number of layers needed to have at least ((1 − ε) · 100)% of the input signal energy be contained in the feature vector.
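The abstract's layer-count estimate can be illustrated with a small back-of-the-envelope calculation. The sketch below assumes a simple geometric model in which the energy not yet captured in the feature vector shrinks by a factor `a` per layer; both the model and the decay factor are illustrative assumptions, not the paper's actual bounds.

```python
import math

def layers_needed(epsilon: float, a: float) -> int:
    """Smallest number of layers n such that at least (1 - epsilon) of the
    input signal energy is contained in the feature vector, under the
    assumed model that the residual (uncaptured) energy after n layers
    behaves like a**n for some decay factor a in (0, 1).

    Condition: a**n <= epsilon  =>  n >= log(epsilon) / log(a).
    """
    if not (0.0 < epsilon < 1.0 and 0.0 < a < 1.0):
        raise ValueError("epsilon and a must lie in (0, 1)")
    return math.ceil(math.log(epsilon) / math.log(a))

# Example: with a hypothetical decay factor a = 0.5 (residual energy halves
# per layer), capturing 95% of the energy (epsilon = 0.05) requires
print(layers_needed(0.05, 0.5))  # -> 5, since 0.5**5 = 0.03125 <= 0.05
```

Under exponential decay the required depth grows only logarithmically in 1/ε, which is the practical upshot of the exponential-rate guarantee stated in the abstract; for the polynomial-rate case the required depth would instead grow polynomially in 1/ε.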
Pages: 4819-4842
Number of pages: 24
Related Papers
50 records in total
  • [1] Deep convolutional neural networks for uncertainty propagation in random fields
    Luo, Xihaier
    Kareem, Ahsan
    [J]. COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2019, 34 (12) : 1043 - 1054
  • [2] Predicting the propagation of acoustic waves using deep convolutional neural networks
    Alguacil, Antonio
    Bauerheim, Michael
    Jacob, Marc C.
    Moreau, Stephane
    [J]. JOURNAL OF SOUND AND VIBRATION, 2021, 512
  • [3] Cellular Network Radio Propagation Modeling with Deep Convolutional Neural Networks
    Zhang, Xin
    Shu, Xiujun
    Zhang, Bingwen
    Ren, Jie
    Zhou, Lizhou
    Chen, Xin
    [J]. KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 2378 - 2386
  • [4] Deep Convolutional Neural Networks
    Gonzalez, Rafael C.
    [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2018, 35 (06) : 79 - 87
  • [5] Radio Propagation Prediction Model Using Convolutional Neural Networks by Deep Learning
    Imai, T.
    Kitao, K.
    Inomata, M.
    [J]. 2019 13TH EUROPEAN CONFERENCE ON ANTENNAS AND PROPAGATION (EUCAP), 2019,
  • [6] Energy Efficient Techniques using FFT for Deep Convolutional Neural Networks
    Nhan Nguyen-Thanh
    Han Le-Duc
    Duc-Tuyen Ta
    Van-Tam Nguyen
    [J]. PROCEEDINGS OF THE 2016 INTERNATIONAL CONFERENCE ON ADVANCED TECHNOLOGIES FOR COMMUNICATIONS (ATC), 2016, : 231 - 236
  • [7] Evaluating the Energy Efficiency of Deep Convolutional Neural Networks on CPUs and GPUs
    Li, Da
    Chen, Xinbo
    Becchi, Michela
    Zong, Ziliang
    [J]. PROCEEDINGS OF 2016 IEEE INTERNATIONAL CONFERENCES ON BIG DATA AND CLOUD COMPUTING (BDCLOUD 2016) SOCIAL COMPUTING AND NETWORKING (SOCIALCOM 2016) SUSTAINABLE COMPUTING AND COMMUNICATIONS (SUSTAINCOM 2016) (BDCLOUD-SOCIALCOM-SUSTAINCOM 2016), 2016, : 477 - 484
  • [8] Deep Anchored Convolutional Neural Networks
    Huang, Jiahui
    Dwivedi, Kshitij
    Roig, Gemma
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 639 - 647
  • [9] DEEP CONVOLUTIONAL NEURAL NETWORKS FOR LVCSR
    Sainath, Tara N.
    Mohamed, Abdel-rahman
    Kingsbury, Brian
    Ramabhadran, Bhuvana
    [J]. 2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 8614 - 8618
  • [10] Deep Unitary Convolutional Neural Networks
    Chang, Hao-Yuan
    Wang, Kang L.
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT II, 2021, 12892 : 170 - 181