Deep Neural Networks with Efficient Guaranteed Invariances

Cited by: 0
Authors:
Rath, Matthias [1,2]
Condurache, Alexandru Paul [1,2]
Affiliations:
[1] Robert Bosch GmbH, Cross-Domain Computing Solutions, Stuttgart, Germany
[2] University of Lübeck, Institute for Signal Processing, Lübeck, Germany
Keywords: (none listed)
DOI: (not available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We address the problem of improving the performance, and in particular reducing the sample complexity, of deep neural networks by enforcing and guaranteeing invariances to symmetry transformations rather than learning them from data. Group-equivariant convolutions are a popular approach to obtaining equivariant representations; the desired invariance is then imposed using pooling operations. For rotations, it has been shown that using invariant integration instead of pooling further improves the sample complexity. In this contribution, we first extend invariant integration beyond rotations to flips and scale transformations. We then address the problem of incorporating multiple desired invariances into a single network. For this purpose, we propose a multi-stream architecture in which each stream is invariant to a different transformation, so that the network can benefit from multiple invariances simultaneously. We demonstrate our approach with successful experiments on Scaled-MNIST, SVHN, CIFAR-10 and STL-10.
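To make the multi-stream idea concrete, below is a minimal sketch (not the authors' implementation) assuming PyTorch. It realizes invariance via group averaging, the simplest instance of invariant integration over a finite group G, i.e. A[f](x) = (1/|G|) Σ_{g∈G} f(g·x); the paper's actual method uses a richer invariant-integration layer rather than a plain average, and the backbone, stream, and head definitions here are illustrative placeholders. One stream is made invariant to 90-degree rotations, another to horizontal flips, and their features are concatenated for classification.

```python
# Minimal sketch, assuming PyTorch; not the authors' implementation.
import torch
import torch.nn as nn

class GroupAveragedStream(nn.Module):
    """Applies a shared backbone to every transformed copy of the input
    and averages the results, yielding features invariant to the group."""
    def __init__(self, backbone, transforms):
        super().__init__()
        self.backbone = backbone
        self.transforms = transforms  # callables forming a finite group

    def forward(self, x):
        feats = [self.backbone(t(x)) for t in self.transforms]
        # Invariant integration as a group average over the orbit.
        return torch.stack(feats, dim=0).mean(dim=0)

def make_backbone(in_ch=1, feat_dim=64):
    # Illustrative placeholder backbone.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, feat_dim), nn.ReLU(),
    )

# Stream 1: invariant to 90-degree rotations (cyclic group C4).
rot_stream = GroupAveragedStream(
    make_backbone(),
    transforms=[lambda x, k=k: torch.rot90(x, k, dims=(2, 3))
                for k in range(4)],
)
# Stream 2: invariant to horizontal flips (group of order 2).
flip_stream = GroupAveragedStream(
    make_backbone(),
    transforms=[lambda x: x, lambda x: torch.flip(x, dims=(3,))],
)

class MultiStreamNet(nn.Module):
    """Multi-stream architecture: each stream is invariant to a different
    transformation; their features are concatenated for classification."""
    def __init__(self, streams, feat_dim=64, num_classes=10):
        super().__init__()
        self.streams = nn.ModuleList(streams)
        self.head = nn.Linear(feat_dim * len(streams), num_classes)

    def forward(self, x):
        return self.head(torch.cat([s(x) for s in self.streams], dim=1))

model = MultiStreamNet([rot_stream, flip_stream])
logits = model(torch.randn(8, 1, 28, 28))  # dummy 28x28 grayscale batch
```

Averaging over the full group orbit gives exact invariance, since transforming the input merely permutes the set of transformed copies inside the mean.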
Pages: 21
Related Papers
50 records in total
  • [31] Trained Rank Pruning for Efficient Deep Neural Networks
    Xu, Yuhui
    Li, Yuxi
    Zhang, Shuai
    Wen, Wei
    Wang, Botao
    Dai, Wenrui
    Qi, Yingyong
    Chen, Yiran
    Lin, Weiyao
    Xiong, Hongkai
    FIFTH WORKSHOP ON ENERGY EFFICIENT MACHINE LEARNING AND COGNITIVE COMPUTING - NEURIPS EDITION (EMC2-NIPS 2019), 2019, : 14 - 17
  • [32] Space Efficient Quantization for Deep Convolutional Neural Networks
    Zhao, Dong-Di
    Li, Fan
    Sharif, Kashif
    Xia, Guang-Min
    Wang, Yu
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2019, 34 : 305 - 317
  • [33] Deep neural networks for efficient steganographic payload location
    Sun, Yu
    Zhang, Hao
    Zhang, Tao
    Wang, Ran
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2019, 16 : 635 - 647
  • [34] TermiNETor: Early Convolution Termination for Efficient Deep Neural Networks
    Mallappa, Uday
    Gangwar, Pranav
    Khaleghi, Behnam
    Yang, Haichao
    Rosing, Tajana
    2022 IEEE 40TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD 2022), 2022, : 635 - 643
  • [35] RECOM: An Efficient Resistive Accelerator for Compressed Deep Neural Networks
    Ji, Houxiang
    Song, Linghao
    Jiang, Li
    Li, Hai
    Chen, Yiran
    PROCEEDINGS OF THE 2018 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), 2018, : 237 - 240
  • [36] An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation
    Hoseini, Farnaz
    Shahbahrami, Asadollah
    Bayat, Peyman
    JOURNAL OF DIGITAL IMAGING, 2018, 31 (05) : 738 - 747
  • [37] An Overview of Efficient Interconnection Networks for Deep Neural Network Accelerators
    Nabavinejad, Seyed Morteza
    Baharloo, Mohammad
    Chen, Kun-Chih
    Palesi, Maurizio
    Kogel, Tim
    Ebrahimi, Masoumeh
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2020, 10 (03) : 268 - 282
  • [38] Efficient Hardware Architectures for Accelerating Deep Neural Networks: Survey
    Dhilleswararao, Pudi
    Boppu, Srinivas
    Manikandan, M. Sabarimalai
    Cenkeramaddi, Linga Reddy
    IEEE ACCESS, 2022, 10 : 131788 - 131828
  • [39] Efficient Execution of Deep Neural Networks on Mobile Devices with NPU
    Tan, Tianxiang
    Cao, Guohong
    IPSN'21: PROCEEDINGS OF THE 20TH ACM/IEEE CONFERENCE ON INFORMATION PROCESSING IN SENSOR NETWORKS, 2021, : 283 - 298
  • [40] SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks
    Faraone, Julian
    Fraser, Nicholas
    Blott, Michaela
    Leong, Philip H. W.
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 4300 - 4309