Zero-power optical convolutional neural network using incoherent light

Cited by: 3
Authors
Fei, Yuhang [1 ]
Sui, Xiubao [1 ]
Gu, Guohua [1 ]
Chen, Qian [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210094, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optical neural network; Irregular convolution; Incoherent light; Zero-power consumption;
DOI
10.1016/j.optlaseng.2022.107410
Chinese Library Classification
O43 [Optics];
Discipline Code
070207; 0803;
Abstract
As a new high-speed intelligent computing method, the optical neural network (ONN) has the advantage of enabling low-power or even zero-power computing. For example, diffractive neural networks built from passive diffractive layers can operate without power consumption. However, current ONN implementations use coherent light to carry the input information, whereas the light in natural scenes is incoherent, which makes it difficult to apply ONNs to real optical scenes for physical observation. In this paper, we propose an irregular incoherent optical convolutional neural network (I2OCNN). The network uses only the reflection and transmission properties of light to realize a controllable rearrangement of the two-dimensional incoherent light field on a series of passive optical devices, thereby achieving cross interconnection of optical neurons and thus an optical convolutional neural network that operates on incoherent light with zero power consumption. Because the architecture is based on intensity modulation and incoherent superposition, incoherent light can be used to carry the input signals, which removes a key obstacle to the practical application of ONNs. In addition, the network can perform irregular convolution. MNIST and Fashion-MNIST were used to verify the image recognition capability of a 7-layer I2OCNN, and the test accuracies were 86.95% and 75.68%, respectively. Theoretical reasoning and simulation results show that this architecture can complete basic image recognition tasks under incoherent illumination with zero power consumption.
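The computational primitive described in the abstract, intensity modulation by passive elements followed by incoherent superposition, amounts to a convolution with non-negative weights applied directly to the intensity of the light field (intensities add without interference, so there is no phase term). Below is a minimal numerical sketch of that primitive; it is not the authors' implementation, and the function name incoherent_conv2d, the kernel size, and the use of transmittance weights restricted to [0, 1] are illustrative assumptions.

    import numpy as np

    # Minimal sketch (not the authors' code): one "incoherent" convolutional layer
    # in which weights are passive transmittance values in [0, 1] and each neuron
    # output is an incoherent sum of intensities. All names are illustrative.

    rng = np.random.default_rng(0)

    def incoherent_conv2d(intensity, transmittance, stride=1):
        """Convolve a non-negative intensity image with a non-negative kernel.

        intensity:     (H, W) array of light intensities (>= 0).
        transmittance: (k, k) array of passive attenuation weights in [0, 1].
        Incoherent superposition means intensities simply add, so the layer
        reduces to a weighted sum of intensities with no phase term.
        """
        H, W = intensity.shape
        k = transmittance.shape[0]
        out_h = (H - k) // stride + 1
        out_w = (W - k) // stride + 1
        out = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = intensity[i * stride:i * stride + k,
                                  j * stride:j * stride + k]
                out[i, j] = np.sum(patch * transmittance)  # intensity addition
        return out

    # Example: a 28x28 MNIST-sized intensity pattern through one passive layer.
    image = rng.uniform(0.0, 1.0, size=(28, 28))     # incoherent input intensities
    kernel = rng.uniform(0.0, 1.0, size=(3, 3))      # passive, non-negative weights
    feature_map = incoherent_conv2d(image, kernel)
    print(feature_map.shape)                         # (26, 26)

The sketch captures only the arithmetic of intensity modulation and incoherent summation; it does not model the reflective/transmissive rearrangement of the light field that produces the irregular (cross-interconnected) kernel support described in the paper.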
Pages: 10