A Layer-Wise Ensemble Technique for Binary Neural Network

Cited by: 0
Authors
Xi, Jiazhen [1]
Yamauchi, Hiroyuki [1]
Affiliations
[1] Fukuoka Inst Technol, Dept Comp Sci & Engn, Higashi Ku, 3-30-1 Wajiro Higashi, Fukuoka 8110295, Japan
Keywords
Machine learning; low-precision neural network; binary neural networks; ensemble learning
DOI
10.1142/S021800142152011X
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Binary neural networks (BNNs) have drawn much attention as one of the most promising techniques for meeting memory-footprint and inference-speed requirements. However, they still suffer from severe intrinsic instability of error convergence, which increases both the prediction error and its standard deviation and is caused mostly by the inherently poor representation offered by only two possible weight values, -1 and +1. In this work, we propose a cost-aware layer-wise ensemble method that addresses this issue without incurring excessive cost; it is characterized by (1) layer-wise bagging and (2) cost-aware selection of the layers to be bagged. One experiment shows that, on CIFAR-10, the proposed method reduces the error and its standard deviation by 15% and 54%, respectively, compared to a baseline BNN. The paper demonstrates and discusses this error reduction and stability, as well as its versatility, through comparisons of the base network model combined with the proposed method and with state-of-the-art prior techniques, across different network sizes and the CIFAR-10, SVHN, and MNIST datasets.
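As a rough illustration of the idea summarized in the abstract, the PyTorch sketch below bags one selected layer of a small binary network: several independently initialized copies of that layer are binarized and their outputs averaged, while every other layer is shared. The class names (BinaryLinear, BaggedLayer), the network shape, and the parameter-budget heuristic in select_layer_to_bag are illustrative assumptions, not the authors' implementation or their cost model.

# Illustrative sketch only (assumed design, not the paper's code):
# layer-wise bagging for a binary network in PyTorch.
import torch
import torch.nn as nn

class BinaryLinear(nn.Module):
    """Linear layer whose weights are binarized to -1/+1 in the forward pass
    (activations are kept real-valued here for brevity)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x):
        # Straight-through estimator: sign() in the forward pass, identity gradient.
        w_bin = self.weight + (torch.sign(self.weight) - self.weight).detach()
        return x @ w_bin.t()

class BaggedLayer(nn.Module):
    """K independently parameterized copies of one layer; outputs are averaged."""
    def __init__(self, make_layer, k=3):
        super().__init__()
        self.members = nn.ModuleList([make_layer() for _ in range(k)])

    def forward(self, x):
        return torch.stack([m(x) for m in self.members]).mean(dim=0)

def select_layer_to_bag(layer_sizes, budget):
    # Toy cost-aware selection (an assumption): bag the largest layer whose
    # replication overhead (its parameter count) still fits the extra budget.
    candidates = [i for i, n in enumerate(layer_sizes) if n <= budget]
    return max(candidates, key=lambda i: layer_sizes[i]) if candidates else None

if __name__ == "__main__":
    dims = [(784, 256), (256, 256), (256, 10)]
    sizes = [i * o for i, o in dims]
    idx = select_layer_to_bag(sizes, budget=70_000)   # selects the middle layer here

    layers = []
    for j, (i, o) in enumerate(dims):
        make = lambda i=i, o=o: BinaryLinear(i, o)
        layers.append(BaggedLayer(make, k=3) if j == idx else make())
    net = nn.Sequential(layers[0], nn.ReLU(), layers[1], nn.ReLU(), layers[2])

    x = torch.randn(4, 784)
    print(net(x).shape)   # torch.Size([4, 10])

Because only the bagged layer is replicated, the extra binary-weight storage grows with that single layer rather than with the whole network, which is one way to keep the ensemble within a cost budget in the spirit of the abstract's "without incurring excessive cost" claim.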
Pages: 21