Efficient training and design of photonic neural network through neuroevolution

Cited by: 55
Authors
Zhang, Tian [1 ]
Wang, Jia [1 ]
Dan, Yihang [1 ]
Lanqiu, Yuxiang [1 ]
Dai, Jian [1 ]
Han, Xu [2 ]
Sun, Xiaojuan [3 ]
Xu, Kun [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, State Key Lab Informat Photon & Opt Commun, Beijing 100876, Peoples R China
[2] Huawei Technol Co Ltd, Shenzhen 518129, Guangdong, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Sci, Beijing 100876, Peoples R China
Source
OPTICS EXPRESS | 2019 / Vol. 27 / Issue 26
Funding
National Natural Science Foundation of China;
Keywords
GENETIC ALGORITHM; INVERSE DESIGN; OPTIMIZATION;
DOI
10.1364/OE.27.037150
CLC number
O43 [Optics];
Discipline code
070207; 0803;
Abstract
Recently, optical neural networks (ONNs) integrated into photonic chips have received extensive attention because they are expected to perform the same pattern recognition tasks as electronic platforms with high efficiency and low power consumption. However, efficient learning algorithms for training ONNs on an on-chip integrated system are still lacking. In this article, we propose a novel learning strategy based on neuroevolution to design and train ONNs. Two typical neuroevolution algorithms are used to determine the hyper-parameters of ONNs and to optimize the weights (phase shifters) in the connections. To demonstrate the effectiveness of the training algorithms, the trained ONNs are applied to classification tasks on the iris plants dataset, the wine recognition dataset, and modulation format recognition. The calculated results demonstrate that the accuracy and stability of the neuroevolution-based training algorithms are competitive with those of other traditional learning algorithms. In comparison with previous works, we introduce an efficient training method for ONNs and demonstrate their broad application prospects in pattern recognition, reinforcement learning and so on. (C) 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
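The core idea in the abstract, evolving the phase-shifter settings of a photonic mesh with a population-based algorithm instead of backpropagation, can be sketched on a toy example. The Mach-Zehnder parametrization, the target unitary, and all genetic-algorithm hyper-parameters below are illustrative assumptions for a minimal sketch, not the paper's actual architecture or algorithm:

```python
import cmath
import math
import random

def mzi_stage(theta, phi):
    """Unitary of one Mach-Zehnder stage: input phase shifter phi,
    a 50:50 coupler, internal phase theta, a second 50:50 coupler."""
    e_t, e_p = cmath.exp(1j * theta), cmath.exp(1j * phi)
    return [[0.5 * (e_t - 1) * e_p, 0.5j * (e_t + 1)],
            [0.5j * (e_t + 1) * e_p, 0.5 * (1 - e_t)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def network(phases):
    """Cascade of MZI stages; phases = [theta1, phi1, theta2, phi2, ...]."""
    u = [[1 + 0j, 0j], [0j, 1 + 0j]]
    for i in range(0, len(phases), 2):
        u = matmul(mzi_stage(phases[i], phases[i + 1]), u)
    return u

# Illustrative target: a balanced (Hadamard-like) 50:50 splitter.
TARGET = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
          [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def fitness(phases):
    """Global-phase-invariant overlap |Tr(T^dagger U)| / 2, in [0, 1]."""
    u = network(phases)
    tr = sum(TARGET[j][i] * u[j][i] for j in range(2) for i in range(2))
    return abs(tr) / 2

def evolve(n_stages=3, pop_size=30, generations=80, sigma=0.25,
           n_elite=6, seed=0):
    """Simple elitist genetic algorithm over the phase vector."""
    rng = random.Random(seed)
    dim = 2 * n_stages
    pop = [[rng.uniform(0, 2 * math.pi) for _ in range(dim)]
           for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))       # best-so-far each generation
        elites = pop[:n_elite]
        # Offspring: Gaussian mutation of a randomly chosen elite.
        pop = elites + [[x + rng.gauss(0, sigma)
                         for x in rng.choice(elites)]
                        for _ in range(pop_size - n_elite)]
    best = max(pop, key=fitness)
    return best, history
```

Because the elites are carried over unchanged and the fitness is deterministic, the best fitness per generation is nondecreasing; the paper's second neuroevolution algorithm, which also tunes hyper-parameters such as network depth, would wrap an outer search around `n_stages`.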
Pages: 37150-37163
Page count: 14
Related papers
(50 items in total)
  • [31] Multi-Objective Hyperparameter Optimization for Spiking Neural Network Neuroevolution
    Parsa, Maryam
    Kulkarni, Shruti R.
    Coletti, Mark
    Bassett, Jeffrey
    Mitchell, J. Parker
    Schuman, Catherine D.
    [J]. 2021 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC 2021), 2021, : 1225 - 1232
  • [32] Gist: Efficient Data Encoding for Deep Neural Network Training
    Jain, Animesh
    Phanishayee, Amar
    Mars, Jason
    Tang, Lingjia
    Pekhimenko, Gennady
    [J]. 2018 ACM/IEEE 45TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA), 2018, : 776 - 789
  • [33] Efficient partition of learning data sets for neural network training
    Inst. of Bioorg. and Petrol. Chem., Kiev, Ukraine
    [author unknown]
    [J]. Neural Netw., 8 (1361-1374):
  • [34] Efficient I/O for Neural Network Training with Compressed Data
    Zhang, Zhao
    Huang, Lei
    Pauloski, J. Gregory
    Foster, Ian T.
    [J]. 2020 IEEE 34TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM IPDPS 2020, 2020, : 409 - 418
  • [35] Efficient Learning Rate Adaptation for Convolutional Neural Network Training
    Georgakopoulos, Spiros V.
    Plagianakos, Vassilis P.
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [36] Efficient recurrent neural network training incorporating a priori knowledge
    Dimopoulos, KP
    Kambhampati, C
    Craddock, R
    [J]. MATHEMATICS AND COMPUTERS IN SIMULATION, 2000, 52 (02) : 137 - 162
  • [37] Efficient partition of learning data sets for neural network training
    Tetko, IV
    Villa, AEP
    [J]. NEURAL NETWORKS, 1997, 10 (08) : 1361 - 1374
  • [38] Rubik: A Hierarchical Architecture for Efficient Graph Neural Network Training
    Chen, Xiaobing
    Wang, Yuke
    Xie, Xinfeng
    Hu, Xing
    Basak, Abanti
    Liang, Ling
    Yan, Mingyu
    Deng, Lei
    Ding, Yufei
    Du, Zidong
    Xie, Yuan
    [J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (04) : 936 - 949
  • [39] Efficient training for the hybrid optical diffractive deep neural network
    Fang, Tao
    Lia, Jingwei
    Wu, Tongyu
    Cheng, Ming
    Dong, Xiaowen
    [J]. AI AND OPTICAL DATA SCIENCES III, 2022, 12019
  • [40] ByteGNN: Efficient Graph Neural Network Training at Large Scale
    Zheng, Chenguang
    Chen, Hongzhi
    Cheng, Yuxuan
    Song, Zhezheng
    Wu, Yifan
    Li, Changji
    Cheng, James
    Yang, Hao
    Zhang, Shuai
    [J]. PROCEEDINGS OF THE VLDB ENDOWMENT, 2022, 15 (06): : 1228 - 1242