Quantization-aware training for low precision photonic neural networks

Cited by: 16
Authors
Kirtas, M. [1 ]
Oikonomou, A. [1 ]
Passalis, N. [1 ]
Mourgias-Alexandris, G. [2 ]
Moralis-Pegios, M. [2 ]
Pleros, N. [2 ]
Tefas, A. [1 ]
Affiliations
[1] Aristotle Univ Thessaloniki, Dept Informat, Computat Intelligence & Deep Learning Grp, Thessaloniki, Greece
[2] Aristotle Univ Thessaloniki, Dept Informat, Wireless & Photon Syst & Networks Grp, Thessaloniki, Greece
Keywords
Photonic deep learning; Neural network quantization; Constrained-aware training
DOI
10.1016/j.neunet.2022.09.015
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in Deep Learning (DL) have fueled interest in developing neuromorphic hardware accelerators that can improve the computational speed and energy efficiency of existing accelerators. Among the most promising research directions are photonic neuromorphic architectures, which can achieve femtojoule-per-MAC efficiencies. Despite the benefits of neuromorphic architectures, a significant bottleneck is the need for expensive, high-speed, high-precision analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) to transfer the electrical signals arising from the various Artificial Neural Network (ANN) operations (inputs, weights, etc.) to the photonic optical engines. The main contribution of this paper is to study the quantization phenomena induced in photonic models by DACs/ADCs as an additional noise/uncertainty source, and to provide a photonics-compliant framework for training photonic DL models with limited precision, reducing the need for expensive high-precision DACs/ADCs. The effectiveness of the proposed method is demonstrated on different architectures, ranging from fully connected and convolutional networks to recurrent architectures, following recent advances in photonic DL. © 2022 Elsevier Ltd. All rights reserved.
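To make the idea concrete, here is a minimal PyTorch sketch of quantization-aware training in the spirit described above: a fake-quantization step emulates the finite resolution of a DAC/ADC stage in the forward pass, while a straight-through estimator (STE) lets gradients bypass the non-differentiable rounding. This is not the authors' photonics-compliant framework; the bit width, the sigmoid activation, and names such as FakeQuantize and QuantizedSigmoidLayer are illustrative assumptions.

```python
# Minimal quantization-aware training sketch (illustrative; assumes PyTorch).
import torch
import torch.nn as nn


class FakeQuantize(torch.autograd.Function):
    """Uniformly quantize a tensor in [0, 1] to 2**bits levels."""

    @staticmethod
    def forward(ctx, x, bits):
        levels = 2 ** bits - 1
        # Clamp to the representable range and round to the nearest level,
        # emulating the finite resolution of a DAC/ADC conversion stage.
        return torch.round(torch.clamp(x, 0.0, 1.0) * levels) / levels

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient through unchanged,
        # since rounding has zero gradient almost everywhere.
        return grad_output, None


class QuantizedSigmoidLayer(nn.Module):
    """Linear layer with a sigmoid activation quantized to `bits` bits."""

    def __init__(self, in_features, out_features, bits=4):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.bits = bits

    def forward(self, x):
        # Sigmoid keeps activations in (0, 1), matching the quantizer range.
        a = torch.sigmoid(self.linear(x))
        return FakeQuantize.apply(a, self.bits)


if __name__ == "__main__":
    model = nn.Sequential(QuantizedSigmoidLayer(784, 128, bits=4),
                          nn.Linear(128, 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()  # gradients flow through the STE despite the rounding
    opt.step()
```

Only the forward pass sees quantized activations, so the network learns weights that remain accurate under the limited precision a low-cost DAC/ADC would impose.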
Pages: 561 - 573
Page count: 13
Related Papers
50 items in total
  • [31] Inertial Measurement Unit Self-Calibration by Quantization-Aware and Memory-Parsimonious Neural Networks
    Cardoni, Matteo
    Pau, Danilo Pietro
    Rezaei, Kiarash
    Mura, Camilla
    ELECTRONICS, 2024, 13 (21)
  • [32] Quantization-Aware and Tensor-Compressed Training of Transformers for Natural Language Understanding
    Yang, Zi
    Choudhary, Samridhi
    Kunzmann, Siegfried
    Zhang, Zheng
    INTERSPEECH 2023, 2023, : 3292 - 3296
  • [33] FLOPSYNC-QACS: Quantization-aware clock synchronization for wireless sensor networks
    Terraneo, Federico
    Papadopoulos, Alessandro Vittorio
    Leva, Alberto
    Prandini, Maria
    JOURNAL OF SYSTEMS ARCHITECTURE, 2017, 80 : 77 - 84
  • [34] Quantization-Aware Parameter Estimation for Audio Upmixing
    Rohlfing, Christian
    Liutkus, Antoine
    Becker, Julian M.
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 746 - 750
  • [35] Quantization-Aware Pruning Criterion for Industrial Applications
    Gil, Yoonhee
    Park, Jong-Hyeok
    Baek, Jongchan
    Han, Soohee
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2022, 69 (03) : 3203 - 3213
  • [36] Multi-Wavelength Parallel Training and Quantization-Aware Tuning for WDM-Based Optical Convolutional Neural Networks Considering Wavelength-Relative Deviations
    Zhu, Ying
    Liu, Min
    Xu, Lu
    Wang, Lei
    Xiao, Xi
    Yu, Shaohua
    2023 28TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC, 2023, : 384 - 389
  • [37] Quantization-Aware Neural Architecture Search with Hyperparameter Optimization for Industrial Predictive Maintenance Applications
    van de Waterlaat, Nick
    Vogel, Sebastian
    Rodriguez, Hiram Rayo Torres
    Sanberg, Willem
    Daalderop, Gerardo
    2023 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION, DATE, 2023
  • [38] Knowledge-guided quantization-aware training for EEG-based emotion recognition
    Zhong, Sheng-hua
    Shi, Jiahao
    Wang, Yi
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2025, 108
  • [39] Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
    Hubara, Itay
    Courbariaux, Matthieu
    Soudry, Daniel
    El-Yaniv, Ran
    Bengio, Yoshua
    JOURNAL OF MACHINE LEARNING RESEARCH, 2018, 18
  • [40] A reconfigurable multi-precision quantization-aware nonlinear activation function hardware module for DNNs
    Hong, Qi
    Liu, Zhiming
    Long, Qiang
    Tong, Hao
    Zhang, Tianxu
    Zhu, Xiaowen
    Zhao, Yunong
    Ru, Hua
    Zha, Yuxing
    Zhou, Ziyuan
    Wu, Jiashun
    Tan, Hongtao
    Hong, Weiqiang
    Xu, Yaohua
    Guo, Xiaohui
    MICROELECTRONICS JOURNAL, 2024, 151