Photonic Convolutional Neural Network Accelerator Assisted by Phase Change Material

Cited by: 1
Authors
Guo Pengxing [1 ,2 ]
Liu Zhiyuan [1 ,2 ]
Hou Weigang [1 ,2 ]
Guo Lei [1 ,2 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Inst Intelligent Commun & Network Secur, Chongqing 400065, Peoples R China
Keywords
machine vision; photonic convolutional neural network accelerator; micro-ring resonator; phase change material; in-memory computing;
DOI
10.3788/AOS221329
CLC Number
O43 [Optics];
Discipline Code
070207 ; 0803 ;
Abstract
Objective The convolutional neural network (CNN) has achieved great success in computer vision, image processing, and speech processing due to its high recognition accuracy. This success is inseparable from the support of hardware accelerators. However, the rapid development of artificial intelligence has led to a dramatic increase in data volume, which places stricter requirements on the computing power of hardware accelerators. Limited by the power and speed of electronic devices, traditional electronic accelerators can hardly meet the computing-power and energy-consumption requirements of large-scale computing operations. As an alternative, micro-ring resonator (MRR) and Mach-Zehnder interferometer (MZI) based silicon photonic accelerators provide an effective solution to the problems faced by electronic accelerators. However, prior photonic accelerators need to read the weights from external memory when performing multiply-accumulate operations and map each value to the bias voltage of an MRR or MZI unit, which increases area and energy consumption. To solve these problems, this paper proposes a nonvolatile silicon photonic convolutional neural network (NVSP-CNN) accelerator. This structure uses Add-Drop MRRs and the nonvolatile phase change material Ge2Sb2Te5 (GST) to realize optical in-memory computing, which helps improve energy efficiency and computing density.

Methods Firstly, we design a photonic dot-product engine on the basis of GST and the Add-Drop MRR (Fig. 2). The GST is embedded on top of the MRR, and its different crystallization degrees change the refractive index of the MRR, which in turn changes the output power of the Through and Drop ports. The crystallization degree of GST is modulated off-chip: an optical pulse raises the internal temperature of the GST to change its crystallization degree, and the material is then cooled rapidly so that the crystallization state is preserved.
This state remains unchanged for a long time without any external current. During computational operations, a short, low-power optical pulse is injected into the MRR's input port and output from the Drop and Through ports. The output optical power is converted to electrical power by a balanced photodiode, which realizes the subtraction T_d − T_p between the Drop-port and Through-port transmissions. Therefore, the values of T_d − T_p under different GST phase states can be used as the weight values of the neural network (Fig. 3). Then, we propose an optical matrix multiplier that combines wavelength division multiplexing (WDM) technology with the GST-MRR-based photonic dot-product engine (Fig. 4). Finally, the optical matrix multiplier is combined with the nonlinear parts (activation, pooling, and full connectivity) to build a complete accelerator, i.e., the NVSP-CNN accelerator (Fig. 5). In NVSP-CNN, the convolution operation is implemented optically, and the nonlinear part is realized electrically.

Results and Discussions As a proof of concept, a 4x4 optical matrix multiplication at 10 Gb/s and 20 Gb/s data rates is verified on the Ansys Lumerical simulation platform. Four wavelengths are used as the input pulses, each carrying a binary sequence of 0s and 1s. The output values obtained by optical simulation fit the theoretical calculation closely (Fig. 6). Then, NVSP-CNN is compared with the DEAP-CNN structure in terms of rate, area, power consumption, and accuracy. As in DEAP-CNN, the computing rate of the NVSP structure is limited by the digital-to-analog converter (DAC) modulation rate. The highest operation rate can reach 5 GSa/s, which is faster than that of mainstream GPUs. Compared with DEAP-CNN, the proposed accelerator reduces power consumption by 48.75% while maintaining the original operation speed, and the area of the matrix-operation part is reduced by 49.75%.
Finally, simulations on the MNIST and notMNIST datasets achieve inference accuracies of 97.80% and 92.45%, respectively. The recognition results show that the accelerator structure can handle most everyday image recognition tasks.

Conclusions This paper proposes an MRR- and GST-based photonic CNN accelerator structure for in-memory computing. Unlike traditional MRR-based accelerators, the NVSP-CNN accelerator avoids the power loss caused by a continuous external power supply for state maintenance and does not require external electrical pads for modulation.
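The Methods section describes how the GST crystallization state tunes the ring's refractive index and loss, so that the Drop-minus-Through transmission difference T_d − T_p serves as a nonvolatile weight. A minimal numerical sketch of this behavior, using the standard Add-Drop MRR power-transmission expressions and assumed placeholder values (not measured Ge2Sb2Te5 parameters), might look like:

```python
# Illustrative model (not the authors' code): an Add-Drop micro-ring
# resonator whose round-trip phase and loss are tuned by the GST
# crystallization fraction p in [0, 1]. All material numbers below are
# assumed placeholders for illustration only.
import numpy as np

def add_drop_mrr(phi, t1=0.9, t2=0.9, a=0.98):
    """Through/Drop power transmission of an Add-Drop MRR.

    phi : round-trip phase detuning from resonance
    t1, t2 : self-coupling coefficients of the two couplers
    a : single-pass amplitude transmission (round-trip loss)
    """
    denom = 1 - 2 * t1 * t2 * a * np.cos(phi) + (t1 * t2 * a) ** 2
    T_through = (t2**2 * a**2 - 2 * t1 * t2 * a * np.cos(phi) + t1**2) / denom
    T_drop = ((1 - t1**2) * (1 - t2**2) * a) / denom
    return T_through, T_drop

def gst_phase_and_loss(p, dphi_max=0.3, a_am=0.99, a_cr=0.90):
    """Map crystallization fraction p to a phase detuning and ring loss
    by linear interpolation between amorphous and crystalline states."""
    return p * dphi_max, a_am + p * (a_cr - a_am)

# Weight = T_drop - T_through, the balanced-photodiode output.
for p in (0.0, 0.5, 1.0):
    dphi, a = gst_phase_and_loss(p)
    T_t, T_d = add_drop_mrr(dphi, a=a)
    print(f"p={p:.1f}  T_through={T_t:.3f}  T_drop={T_d:.3f}  w={T_d - T_t:+.3f}")
```

Sweeping p from 0 to 1 detunes the ring and redistributes power from the Drop port toward the Through port, so the balanced-detection weight moves continuously across positive and negative values, matching the role of T_d − T_p in Fig. 3.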
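The WDM matrix multiplier of the proof of concept can be emulated abstractly as well. The sketch below is an assumption-laden illustration, not the authors' Lumerical setup: each stored weight stands in for one GST-MRR cell's T_d − T_p value, and each wavelength carries one element of a binary (0/1) input sequence, so a balanced photodiode per row sums the per-wavelength products into a dot product:

```python
# Abstract sketch of the 4x4 WDM matrix-vector multiplication
# (illustrative only; weight and input values are randomly generated).
import numpy as np

rng = np.random.default_rng(0)

# Weights stored non-volatilely in the GST cells, modeled directly as
# transmission differences T_d - T_p in [-1, 1].
W = rng.uniform(-1.0, 1.0, size=(4, 4))

# Binary input pulses: each column is one 4-element input vector,
# one wavelength per element, as in the 0/1 sequences of the paper.
X = rng.integers(0, 2, size=(4, 8)).astype(float)

# Each output row models one balanced photodiode summing the
# per-wavelength products w_i * x_i -- an optical dot product per row.
Y_optical = np.array([[np.dot(W[r], X[:, c]) for c in range(X.shape[1])]
                      for r in range(W.shape[0])])

# The optical result should match the theoretical matrix product.
Y_theory = W @ X
print(np.allclose(Y_optical, Y_theory))  # True
```

This mirrors the verification in Fig. 6, where the optically simulated outputs are compared against the theoretical matrix-multiplication values.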
Pages: 10
Related Papers
28 records in total
  • [1] Scalable FPGA Accelerator for Deep Convolutional Neural Networks with Stochastic Streaming
    Alawad, Mohammed
    Lin, Mingjie
    [J]. IEEE TRANSACTIONS ON MULTI-SCALE COMPUTING SYSTEMS, 2018, 4 (04): 888 - 899
  • [2] Digital Electronics and Analog Photonics for Convolutional Neural Networks (DEAP-CNNs)
    Bangari, Viraj
    Marquez, Bicky A.
    Miller, Heidi B.
    Tait, Alexander N.
    Nahmias, Mitchell A.
    de Lima, Thomas Ferreira
    Peng, Hsuan-Tung
    Prucnal, Paul R.
    Shastri, Bhavin J.
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN QUANTUM ELECTRONICS, 2020, 26 (01)
  • [3] Toward Fast Neural Computing using All-Photonic Phase Change Spiking Neurons
    Chakraborty, Indranil
    Saha, Gobinda
    Sengupta, Abhronil
    Roy, Kaushik
    [J]. SCIENTIFIC REPORTS, 2018, 8
  • [4] Research Progress in the Applications of Convolutional Neural Networks in Optical Information Processing
    Di Jianglei
    Tang Ju
    Wu Ji
    Wang Kaiqiang
    Ren Zhenbo
    Zhang Mengmeng
    Zhao Jianlin
    [J]. LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (16)
  • [5] Design of optical neural networks with component imprecisions
    Fang, Michael Y-S
    Manipatruni, Sasikanth
    Wierzynski, Casimir
    Khosrowshahi, Amir
    DeWeese, Michael R.
    [J]. OPTICS EXPRESS, 2019, 27 (10): 14009 - 14029
  • [6] Parallel convolutional processing using an integrated photonic tensor core
    Feldmann, J.
    Youngblood, N.
    Karpov, M.
    Gehring, H.
    Li, X.
    Stappers, M.
    Le Gallo, M.
    Fu, X.
    Lukashchuk, A.
    Raja, A. S.
    Liu, J.
    Wright, C. D.
    Sebastian, A.
    Kippenberg, T. J.
    Pernice, W. H. P.
    Bhaskaran, H.
    [J]. NATURE, 2021, 589 (7840) : 52 - +
  • [7] All-optical spiking neurosynaptic networks with self-learning capabilities
    Feldmann, J.
    Youngblood, N.
    Wright, C. D.
    Bhaskaran, H.
    Pernice, W. H. P.
    [J]. NATURE, 2019, 569 (7755) : 208 - +
  • [8] Phase-Change Material Based Photonic Digital-to-Analog Converter for Arbitrary Waveform Generation
    Guo Pengxing
    Zhao Peng
    Hou Weigang
    Guo Lei
    [J]. ACTA OPTICA SINICA, 2022, 42 (15)
  • [9] Potential Threats and Possible Countermeasures for Photonic Network-on-Chip
    Guo, Pengxing
    Hou, Weigang
    Guo, Lei
    Cao, Zizheng
    Ning, Zhaolong
    [J]. IEEE COMMUNICATIONS MAGAZINE, 2020, 58 (09) : 48 - 53
  • [10] O-Star: An Optical Switching Architecture Featuring Mode and Wavelength-Division Multiplexing for On-Chip Many-Core Systems
    Hou, Weigang
    Guo, Pengxing
    Guo, Lei
    Zhang, Xu
    Chen, Hui
    Liu, Weichen
    [J]. JOURNAL OF LIGHTWAVE TECHNOLOGY, 2022, 40 (01) : 24 - 36