Trainable Preprocessing for Reduced Precision Neural Networks

Cited: 0
Authors
Csordas, Gabor [3 ]
Denolf, Kristof [1 ,2 ]
Fraser, Nicholas [1 ,2 ]
Pappalardo, Alessandro [1 ,2 ]
Vissers, Kees [1 ,2 ]
Affiliations
[1] Xilinx Res Labs, Longmont, CO USA
[2] Xilinx Res Labs, Dublin, Ireland
[3] Ecole Polytech Fed Lausanne, Lausanne, Switzerland
Source
29TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2021) | 2021
Keywords
Data Preprocessing; Quantized Neural Networks
DOI
None available
Chinese Library Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Applications of neural networks are emerging in many fields and are frequently implemented in embedded environments, which introduce power, throughput and latency constraints alongside accuracy. Although practical computer vision solutions almost always involve some kind of preprocessing, most research focuses on the network itself. As a result, the preprocessing remains optimized for human perception and is not tuned to the neural network. We propose optimizing the preprocessing along with the network using backpropagation and gradient descent. This opens up the accuracy-versus-implementation-cost design space towards more cost-efficient implementations by exploiting reduced-precision inputs. In particular, we evaluate the effect of two preprocessing techniques, color conversion and dithering, using the CIFAR10 and ImageNet datasets with different networks.
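The core idea of the abstract can be sketched in code: prepend a learnable color transform to the network and quantize its output to reduced precision with a straight-through estimator, so that backpropagation reaches the preprocessing parameters. The following is a minimal PyTorch sketch under those assumptions; the module names (TrainableColorConversion, QuantizeInput) and the toy backbone are illustrative stand-ins and are not taken from the paper, which additionally evaluates dithering.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableColorConversion(nn.Module):
    """Learnable per-pixel 3x3 color transform (a 1x1 convolution),
    trained jointly with the network instead of a fixed RGB->YCbCr step."""
    def __init__(self):
        super().__init__()
        self.mix = nn.Conv2d(3, 3, kernel_size=1, bias=True)

    def forward(self, x):
        return self.mix(x)

class QuantizeInput(nn.Module):
    """Uniform quantizer to `bits` bits with a straight-through
    estimator, so gradients still reach the preprocessing parameters."""
    def __init__(self, bits=4):
        super().__init__()
        self.levels = 2 ** bits - 1

    def forward(self, x):
        q = torch.round(x.clamp(0.0, 1.0) * self.levels) / self.levels
        return x + (q - x).detach()  # identity gradient through round()

# Toy backbone standing in for the paper's CIFAR10/ImageNet networks.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

# Preprocessing, quantizer and network trained end-to-end with
# ordinary backpropagation and gradient descent.
model = nn.Sequential(TrainableColorConversion(), QuantizeInput(bits=4), backbone)

x = torch.rand(8, 3, 32, 32)  # a batch of RGB images in [0, 1]
loss = F.cross_entropy(model(x), torch.randint(0, 10, (8,)))
loss.backward()  # gradients flow into the color-transform weights
```

Because the quantizer sits after the learnable transform, training can shape the color representation to lose less task-relevant information at the reduced input precision.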
Pages: 1546-1550
Page count: 5
Related Papers
50 records in total
  • [1] Noise Resilience of Reduced Precision Neural Networks
    Sanjeet, Sai
    Boppana, Sannidhi
    Sahoo, Bibhu Datta
    Fujita, Masahiro
    THE PROCEEDINGS OF THE 13TH INTERNATIONAL SYMPOSIUM ON HIGHLY EFFICIENT ACCELERATORS AND RECONFIGURABLE TECHNOLOGIES, HEART 2023, 2023, : 114 - 118
  • [2] PREPROCESSING IN ATTRACTOR NEURAL NETWORKS
    CARVALHAES, CG
    COSTA, AT
    PENNA, TJP
    INTERNATIONAL JOURNAL OF MODERN PHYSICS C-PHYSICS AND COMPUTERS, 1995, 6 (01): : 1 - 10
  • [3] A vector quantization circuit for trainable neural networks
    Ancona, F
    Oddone, G
    Rovetta, S
    Uneddu, G
    Zunino, R
    ICECS 96 - PROCEEDINGS OF THE THIRD IEEE INTERNATIONAL CONFERENCE ON ELECTRONICS, CIRCUITS, AND SYSTEMS, VOLS 1 AND 2, 1996, : 1131 - 1134
  • [4] Trainable and explainable simplicial map neural networks
    Paluzo-Hidalgo, Eduardo
    Gonzalez-Diaz, Rocio
    Gutierrez-Naranjo, Miguel A.
    INFORMATION SCIENCES, 2024, 667
  • [5] Stability of Modular Recurrent Trainable Neural Networks
    Hernandez Manzano, Sergio Miguel
    Baruch, Ieroham
    NATURE-INSPIRED COMPUTATION AND MACHINE LEARNING, PT II, 2014, 8857 : 95 - 104
  • [6] Trainable quantization for Speedy Spiking Neural Networks
    Castagnetti, Andrea
    Pegatoquet, Alain
    Miramond, Benoit
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [7] Recurrent neural networks with trainable amplitude of activation functions
    Goh, SL
    Mandic, DP
    NEURAL NETWORKS, 2003, 16 (08) : 1095 - 1100
  • [8] IMPQ: REDUCED COMPLEXITY NEURAL NETWORKS VIA GRANULAR PRECISION ASSIGNMENT
    Gonugondla, Sujan Kumar
    Shanbhag, Naresh R.
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 66 - 70
  • [9] Impact of Reduced Precision in the Reliability of Deep Neural Networks for Object Detection
    dos Santos, Fernando Fernandes
    Navaux, Philippe
    Carro, Luigi
    Rech, Paolo
    2019 IEEE EUROPEAN TEST SYMPOSIUM (ETS), 2019,
  • [10] Incremental evolution of trainable neural networks that are backwards compatible
    Christenson, C
    Kaikhah, K
    PROCEEDINGS OF THE IASTED INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND APPLICATIONS, 2006, : 222 - +