Cascading photonic reservoirs with deep neural networks increases computational performance

Cited: 0
Authors
Bauwens, Ian [1,2]
Van der Sande, Guy [1]
Bienstman, Peter [2]
Verschaffelt, Guy [1]
Affiliations
[1] Vrije Univ Brussel, Appl Phys Res Grp, Pl Laan 2, B-1050 Brussels, Belgium
[2] Univ Ghent, Dept Informat Technol, Photon Res Grp, IMEC, Technol Pk Zwijnaarde 126, B-9052 Ghent, Belgium
Source
MACHINE LEARNING IN PHOTONICS | 2024, Vol. 13017
Keywords
Preprocessor; deep neural network; delay-based reservoir computing; machine learning; semiconductor laser; feedback
DOI
10.1117/12.3017209
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) have been successfully applied to complex problems, such as pattern recognition in big-data analysis. To achieve good computational performance, these networks are often designed with a large number of trainable parameters. As a result, DNNs are often very energy-intensive and time-consuming to train. In this work, we propose to preprocess the input data with a photonic reservoir instead of injecting it directly into the DNN. A photonic reservoir consists of a network of many randomly connected nodes that do not need to be trained. It forms an additional layer in front of the deep neural network and can transform the input data into a state in a higher-dimensional state space. This allows us to reduce the size of the DNN and the amount of training it requires. We test this approach using numerical simulations, which show that a photonic reservoir used as preprocessor improves the performance of a deep neural network, reflected in a lower test error, on the one-step-ahead prediction task of the Santa Fe time series. The stand-alone DNN performs poorly on this task, resulting in a high test error. As we also discuss in detail in Bauwens et al., Frontiers in Physics 10, 1051941 (2022), we conclude that photonic reservoirs are well suited as physical preprocessors for deep neural networks on time-dependent tasks, owing to their fast computation times and low energy consumption.
Pages: 5
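The pipeline described in the abstract can be illustrated with a minimal software sketch: a fixed, randomly connected echo-state-style reservoir expands each input sample into a higher-dimensional state, and only a small downstream network is trained on those states. This is an assumption-laden stand-in, not the authors' delay-based photonic laser reservoir: the Mackey-Glass-like surrogate signal (the Santa Fe laser data is not bundled here), the reservoir size N, the spectral-radius scaling, and the MLP layout are all illustrative choices.

```python
# Hypothetical sketch of reservoir-as-preprocessor for one-step-ahead prediction.
# NOT the authors' photonic setup; all hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_series(n=2000):
    """Chaotic stand-in for the Santa Fe laser series (Mackey-Glass-like map)."""
    x = np.zeros(n + 50)
    x[:50] = 1.2                                  # constant initial history
    for t in range(49, n + 49):
        x[t + 1] = x[t] + 0.2 * x[t - 17] / (1 + x[t - 17] ** 10) - 0.1 * x[t]
    return x[50:]

u = make_series()

# Fixed random reservoir: weights are drawn once and never trained.
N = 200                                           # number of reservoir nodes
W_in = rng.uniform(-0.5, 0.5, size=N)             # input weights
W = rng.normal(0, 1, size=(N, N))                 # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

# Drive the reservoir: each scalar input sample is mapped to an N-dim state.
states = np.zeros((len(u), N))
x = np.zeros(N)
for t, ut in enumerate(u):
    x = np.tanh(W @ x + W_in * ut)                # nonlinear state update
    states[t] = x

# Train only a small DNN readout on the high-dimensional reservoir states;
# the state at time t is used to predict the sample at time t + 1.
X_train, y_train = states[200:1500], u[201:1501]  # first 200 steps = washout
X_test, y_test = states[1500:-1], u[1501:]

dnn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
dnn.fit(X_train, y_train)
nmse = np.mean((dnn.predict(X_test) - y_test) ** 2) / np.var(y_test)
print(f"test NMSE: {nmse:.4f}")
```

Feeding the raw samples (windows of `u`) into the MLP instead of `states` mimics the stand-alone DNN baseline; per the abstract, that baseline yields a noticeably higher test error than the reservoir-preprocessed network on this task.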