Sparse Ternary Connect: Convolutional Neural Networks Using Ternarized Weights with Enhanced Sparsity

Cited by: 0
Authors
Jin, Canran [1 ]
Sun, Heming [1 ]
Kimura, Shinji [1 ]
Affiliations
[1] Waseda Univ, Grad Sch Informat Prod & Syst, Wakamatsu Ku, 2-7 Hibikino, Kitakyushu, Fukuoka 8080135, Japan
Keywords
DOI
Not available
Chinese Library Classification
TP3 [computing technology; computer technology]
Discipline Code
0812
Abstract
Convolutional Neural Networks (CNNs) are indispensable for achieving state-of-the-art results in a wide range of tasks. In this work, we exploit ternary weights in both the inference and training of CNNs and further propose Sparse Ternary Connect (STC), in which floating-point kernel weights are converted to 1, -1, and 0 according to a new conversion rule with a controlled ratio of zeros. STC saves hardware resources substantially with only a small degradation in precision. Experimental evaluation on two popular datasets (CIFAR-10 and SVHN) shows that the proposed method reduces resource utilization (by 28.9% of LUTs, 25.3% of FFs, 97.5% of DSPs, and 88.7% of BRAMs on a Xilinx Kintex-7 FPGA) with less than 0.5% accuracy loss.
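The record reproduces only the abstract, so the paper's actual STC conversion rule is not available here. The sketch below illustrates one plausible threshold-based ternarization in NumPy in which the fraction of zeros is controlled directly; the function name `ternarize` and the quantile-based threshold are assumptions for illustration, not the authors' method.

```python
import numpy as np

def ternarize(weights: np.ndarray, zero_ratio: float = 0.5) -> np.ndarray:
    """Convert float weights to {-1, 0, +1} with a controlled fraction of zeros.

    Hypothetical rule: zero out the smallest-magnitude weights up to the
    `zero_ratio` quantile of |w| and keep only the sign of the rest. The
    paper's actual STC conversion rule is not given in this record.
    """
    # Threshold chosen so that ~zero_ratio of the weights fall below it.
    threshold = np.quantile(np.abs(weights), zero_ratio)
    ternary = np.sign(weights)
    # Smallest-magnitude weights become 0; the rest keep their sign (+1/-1).
    ternary[np.abs(weights) < threshold] = 0.0
    return ternary

# Usage: ternarize a random 3x3 conv kernel bank, zeroing ~50% of weights.
w = np.random.randn(3, 3, 16, 32).astype(np.float32)
tw = ternarize(w, zero_ratio=0.5)
print("zero ratio:", float(np.mean(tw == 0)))  # close to 0.5 by construction
```

Controlling the zero ratio is what distinguishes a sparse ternary scheme from plain sign-based binarization: the zeros are what allow multiplications and weight storage to be skipped in hardware, which is consistent with the LUT/FF/DSP/BRAM savings reported in the abstract.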
Pages: 190-195
Number of pages: 6