Quantization-aware training for low precision photonic neural networks

Cited by: 7
Authors
Kirtas, M. [1 ]
Oikonomou, A. [1 ]
Passalis, N. [1 ]
Mourgias-Alexandris, G. [2 ]
Moralis-Pegios, M. [2 ]
Pleros, N. [2 ]
Tefas, A. [1 ]
Affiliations
[1] Aristotle Univ Thessaloniki, Dept Informat, Computat Intelligence & Deep Learning Grp, Thessaloniki, Greece
[2] Aristotle Univ Thessaloniki, Dept Informat, Wireless & Photon Syst & Networks Grp, Thessaloniki, Greece
Keywords
Photonic deep learning; Neural network quantization; Constrained-aware training
DOI
10.1016/j.neunet.2022.09.015
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in Deep Learning (DL) have fueled interest in developing neuromorphic hardware accelerators that can improve the computational speed and energy efficiency of existing accelerators. Among the most promising research directions toward this goal are photonic neuromorphic architectures, which can achieve femtojoule-per-MAC efficiencies. Despite the benefits that arise from the use of neuromorphic architectures, a significant bottleneck is the need for expensive high-speed, high-precision analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) to transfer the electrical signals, originating from the various Artificial Neural Network (ANN) operations (inputs, weights, etc.), to the photonic optical engines. The main contribution of this paper is to study the quantization phenomena induced in photonic models by DACs/ADCs as an additional noise/uncertainty source, and to provide a photonics-compliant framework for training photonic DL models with limited precision, reducing the need for expensive high-precision DACs/ADCs. The effectiveness of the proposed method is demonstrated using different architectures, ranging from fully connected and convolutional networks to recurrent architectures, following recent advances in photonic DL. © 2022 Elsevier Ltd. All rights reserved.
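The quantization effect the abstract attributes to low-precision DACs/ADCs can be sketched as uniform "fake quantization" applied during the forward pass, which is the standard mechanism behind quantization-aware training. The snippet below is an illustrative assumption (the function `fake_quantize` and its clipping range are hypothetical, not the paper's implementation): it clips a signal to a converter's operating range and snaps it to one of 2^n uniformly spaced levels.

```python
import numpy as np

def fake_quantize(x, n_bits, x_min=-1.0, x_max=1.0):
    """Simulate an n_bits DAC/ADC: clip the signal to the converter's
    range, then round it to the nearest of 2**n_bits uniform levels."""
    n_levels = 2 ** n_bits
    scale = (x_max - x_min) / (n_levels - 1)
    x_clipped = np.clip(x, x_min, x_max)
    return np.round((x_clipped - x_min) / scale) * scale + x_min

# In quantization-aware training, the forward pass sees these quantized
# values while the backward pass treats rounding as the identity (the
# straight-through estimator), so gradients still update the
# full-precision weights despite the zero-gradient rounding step.
w = np.array([-0.73, 0.12, 0.58])
w_q = fake_quantize(w, n_bits=4)
```

Training against this simulated converter lets the model adapt its weights to the limited-precision grid, which is what allows cheaper, lower-resolution DACs/ADCs at inference time.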
Pages: 561-573
Page count: 13
Related Papers
50 records in total
  • [1] Mixed-precision quantization-aware training for photonic neural networks
    Kirtas, Manos
    Passalis, Nikolaos
    Oikonomou, Athina
    Moralis-Pegios, Miltos
    Giamougiannis, George
    Tsakyridis, Apostolos
    Mourgias-Alexandris, George
    Pleros, Nikolaos
    Tefas, Anastasios
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023, 35 (29): 21361-21379
  • [2] Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function
    Shymyrbay, Ayan
    Fouda, Mohammed E.
    Eltawil, Ahmed
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [3] A Robust, Quantization-Aware Training Method for Photonic Neural Networks
    Oikonomou, A.
    Kirtas, M.
    Passalis, N.
    Mourgias-Alexandris, G.
    Moralis-Pegios, M.
    Pleros, N.
    Tefas, A.
    [J]. ENGINEERING APPLICATIONS OF NEURAL NETWORKS, EAAAI/EANN 2022, 2022, 1600: 427-438
  • [4] Approximation- and Quantization-Aware Training for Graph Neural Networks
    Novkin, Rodion
    Klemme, Florian
    Amrouch, Hussam
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (2): 599-612
  • [5] SQUAT: Stateful Quantization-Aware Training in Recurrent Spiking Neural Networks
    Venkatesh, Sreyes
    Marinescu, Razvan
    Eshraghian, Jason K.
    [J]. 2024 NEURO INSPIRED COMPUTATIONAL ELEMENTS CONFERENCE, NICE, 2024
  • [6] Training-aware Low Precision Quantization in Spiking Neural Networks
    Shymyrbay, Ayan
    Fouda, Mohammed E.
    Eltawil, Ahmed
    [J]. 2022 56TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2022: 1147-1151
  • [7] Quantization-Aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks
    Lechner, Mathias
    Zikelic, Dorde
    Chatterjee, Krishnendu
    Henzinger, Thomas A.
    Rus, Daniela
    [J]. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023: 14964-14973
  • [8] Overcoming Oscillations in Quantization-Aware Training
    Nagel, Markus
    Fournarakis, Marios
    Bondarenko, Yelysei
    Blankevoort, Tijmen
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [9] Disentangled Loss for Low-Bit Quantization-Aware Training
    Allenet, Thibault
    Briand, David
    Bichler, Olivier
    Sentieys, Olivier
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022: 2787-2791