Energy Propagation in Deep Convolutional Neural Networks

Cited by: 6
Authors
Wiatowski, Thomas [1 ]
Grohs, Philipp [2 ]
Boelcskei, Helmut [1 ]
Affiliations
[1] Swiss Fed Inst Technol, Dept Informat Technol & Elect Engn, CH-8092 Zurich, Switzerland
[2] Univ Vienna, Fac Math, A-1090 Vienna, Austria
Keywords
Machine learning; deep convolutional neural networks; scattering networks; energy decay and conservation; frame theory
DOI
10.1109/TIT.2017.2756880
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Many practical machine learning tasks employ very deep convolutional neural networks. Such large depths pose formidable computational challenges in training and operating the network. It is therefore important to understand how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers. In addition, it is desirable that the feature extractor generated by the network be informative in the sense of the only signal mapping to the all-zeros feature vector being the zero input signal. This "trivial null-set" property can be accomplished by asking for "energy conservation" in the sense of the energy in the feature vector being proportional to that of the corresponding input signal. This paper establishes conditions for energy conservation (and thus for a trivial null-set) for a wide class of deep convolutional neural network-based feature extractors and characterizes corresponding feature map energy decay rates. Specifically, we consider general scattering networks employing the modulus non-linearity and we find that under mild analyticity and high-pass conditions on the filters (which encompass, inter alia, various constructions of Weyl-Heisenberg filters, wavelets, ridgelets, α-curvelets, and shearlets) the feature map energy decays at least polynomially fast. For broad families of wavelets and Weyl-Heisenberg filters, the guaranteed decay rate is shown to be exponential. Moreover, we provide handy estimates of the number of layers needed to have at least ((1 − ε) · 100)% of the input signal energy be contained in the feature vector.
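
The following is a minimal, illustrative Python sketch (not the paper's construction) of the energy-propagation idea described in the abstract: a signal is pushed through a toy scattering network built from a low-pass output filter and a dyadic band-pass filter bank followed by the modulus non-linearity, and the cumulative fraction of input energy captured in the feature vector is reported per layer. The Gaussian filters and the function names make_filter_bank and captured_energy_fractions are hypothetical stand-ins; the toy filters do not exactly satisfy the paper's frame and analyticity conditions, so the captured fraction need not converge to 1.

import numpy as np

def make_filter_bank(n, num_bandpass=5):
    """Return FFT-domain low-pass and band-pass filters on a length-n grid."""
    freqs = np.fft.fftfreq(n)                          # normalized frequencies in [-0.5, 0.5)
    lowpass = np.exp(-(freqs / 0.02) ** 2)             # narrow Gaussian low-pass (output filter)
    centers = 0.4 * 2.0 ** (-np.arange(num_bandpass))  # dyadic band-pass centers
    bandpass = [np.exp(-((np.abs(freqs) - c) / (0.4 * c)) ** 2) for c in centers]
    # Normalize so the squared frequency responses sum to at most 1, making the
    # filter bank non-expansive and keeping captured-energy fractions in [0, 1].
    total = lowpass ** 2 + sum(h ** 2 for h in bandpass)
    scale = np.sqrt(np.max(total))
    return lowpass / scale, [h / scale for h in bandpass]

def captured_energy_fractions(x, num_layers=4):
    """Cumulative fraction of ||x||^2 present in the feature vector after each layer."""
    n = len(x)
    lowpass, bandpass = make_filter_bank(n)
    input_energy = np.sum(np.abs(x) ** 2)
    propagated = [np.asarray(x, dtype=float)]
    captured = 0.0
    fractions = []
    for _ in range(num_layers):
        next_propagated = []
        for u in propagated:
            U = np.fft.fft(u)
            # The low-pass output of each node is appended to the feature vector.
            out = np.fft.ifft(U * lowpass)
            captured += np.sum(np.abs(out) ** 2)
            # Band-pass branches followed by the modulus are propagated one layer deeper.
            next_propagated.extend(np.abs(np.fft.ifft(U * h)) for h in bandpass)
        propagated = next_propagated
        fractions.append(captured / input_energy)
    return fractions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)
    for depth, frac in enumerate(captured_energy_fractions(x), start=1):
        print(f"layers = {depth}: captured energy fraction = {frac:.3f}")

Printing the per-layer fractions mimics, in spirit, the paper's estimates of how many layers are needed before at least ((1 − ε) · 100)% of the input energy resides in the feature vector: the modulus shifts energy toward low frequencies, so each additional layer's low-pass outputs absorb a further share of the remaining energy.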
Pages: 4819 - 4842
Number of pages: 24