Class-specific differential detection in diffractive optical neural networks improves inference accuracy

Cited by: 0
Authors
Jingxi Li [1,2,3]
Deniz Mengu [1,2,3]
Yi Luo [1,2,3]
Yair Rivenson [1,2,3]
Aydogan Ozcan [1,2,3]
Affiliations
[1] University of California at Los Angeles, Department of Electrical and Computer Engineering
[2] University of California at Los Angeles, Department of Bioengineering
[3] University of California at Los Angeles, California NanoSystems Institute
Keywords: (none listed)
DOI: not available
CLC classification: TP183 (Artificial neural networks and computing); O436.1 (Interference and diffraction)
Abstract
Optical computing provides unique opportunities in terms of parallelization, scalability, power efficiency, and computational speed and has attracted major interest for machine learning. Diffractive deep neural networks have been introduced earlier as an optical machine learning framework that uses task-specific diffractive surfaces designed by deep learning to all-optically perform inference, achieving promising performance for object classification and imaging. We demonstrate systematic improvements in diffractive optical neural networks, based on a differential measurement technique that mitigates the strict nonnegativity constraint of light intensity. In this differential detection scheme, each class is assigned to a separate pair of detectors, behind a diffractive optical network, and the class inference is made by maximizing the normalized signal difference between the photodetector pairs. Using this differential detection scheme, involving 10 photodetector pairs behind 5 diffractive layers with a total of 0.2 million neurons, we numerically achieved blind testing accuracies of 98.54%, 90.54%, and 48.51% for MNIST, Fashion-MNIST, and grayscale CIFAR-10 datasets, respectively. Moreover, by utilizing the inherent parallelization capability of optical systems, we reduced the cross-talk and optical signal coupling between the positive and negative detectors of each class by dividing the optical path into two jointly trained diffractive neural networks that work in parallel. We further made use of this parallelization approach and divided individual classes in a target dataset among multiple jointly trained diffractive neural networks. Using this class-specific differential detection in jointly optimized diffractive neural networks that operate in parallel, our simulations achieved blind testing accuracies of 98.52%, 91.48%, and 50.82% for MNIST, Fashion-MNIST, and grayscale CIFAR-10 datasets, respectively, coming close to the performance of some of the earlier generations of all-electronic deep neural networks, e.g., LeNet, which achieves classification accuracies of 98.77%, 90.27%, and 55.21% for the same datasets, respectively. In addition to these jointly optimized diffractive neural networks, we also independently optimized multiple diffractive networks and utilized them in a way that is similar to ensemble methods practiced in machine learning; using 3 independently optimized differential diffractive neural networks that optically project their light onto a common output/detector plane, we numerically achieved blind testing accuracies of 98.59%, 91.06%, and 51.44% for MNIST, Fashion-MNIST, and grayscale CIFAR-10 datasets, respectively. Through these systematic advances in designing diffractive neural networks, the reported classification accuracies set the state of the art for all-optical neural network design. The presented framework might be useful for bringing optical neural network-based low-power solutions to various machine learning applications and for designing new task-specific computational cameras.
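The differential read-out described in the abstract can be illustrated with a short numerical sketch. This is a minimal illustration, not the authors' code: the helper names, the toy intensities, and the exact form of the normalized difference (here (I+ - I-)/(I+ + I-)) are assumptions made for clarity; the class decision simply maximizes the normalized signal difference over the detector pairs, and the ensemble variant assumes that intensities from parallel networks add incoherently at the shared detectors.

    import numpy as np

    # Sketch of the differential detection read-out (assumed form, not the authors'
    # implementation). Each class has a positive and a negative detector behind the
    # diffractive network; inference picks the class whose detector pair gives the
    # largest normalized signal difference.

    def differential_scores(i_pos, i_neg, eps=1e-12):
        # Normalized difference per class from nonnegative detector intensities.
        i_pos = np.asarray(i_pos, dtype=float)
        i_neg = np.asarray(i_neg, dtype=float)
        return (i_pos - i_neg) / (i_pos + i_neg + eps)

    def predict_class(i_pos, i_neg):
        # Inferred class label = argmax of the normalized differential signal.
        return int(np.argmax(differential_scores(i_pos, i_neg)))

    # Toy example with 10 classes (e.g., MNIST digits): random nonnegative values
    # stand in for the optical power measured on each detector.
    rng = np.random.default_rng(0)
    print(predict_class(rng.uniform(0.0, 1.0, size=10), rng.uniform(0.0, 1.0, size=10)))

    # Ensemble variant from the abstract: several independently optimized networks
    # project onto a common detector plane, so (assuming incoherent summation) their
    # intensities add at each detector before the same differential read-out.
    i_pos_sum = sum(rng.uniform(0.0, 1.0, size=10) for _ in range(3))
    i_neg_sum = sum(rng.uniform(0.0, 1.0, size=10) for _ in range(3))
    print(predict_class(i_pos_sum, i_neg_sum))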
Pages: 5-17
Page count: 13
Related papers (50 in total)
  • [21] Learning class-specific edges for object detection and segmentation
    Prasad, Mukta
    Zisserman, Andrew
    Fitzgibbon, Andrew
    Kumar, M. Pawan
    Torr, P. H. S.
    COMPUTER VISION, GRAPHICS AND IMAGE PROCESSING, PROCEEDINGS, 2006, 4338 : 94 - +
  • [23] Design of task-specific optical systems using broadband diffractive neural networks
    Luo, Yi
    Mengu, Deniz
    Yardimci, Nezih T.
    Rivenson, Yair
    Veli, Muhammed
    Jarrahi, Mona
    Ozcan, Aydogan
    LIGHT-SCIENCE & APPLICATIONS, 2019, 8 (1)
  • [24] In situ optical backpropagation training of diffractive optical neural networks
    Zhou, Tiankuang
    Fang, Lu
    Yan, Tao
    Wu, Jiamin
    Li, Yipeng
    Fan, Jingtao
    Wu, Huaqiang
    Lin, Xing
    Dai, Qionghai
    PHOTONICS RESEARCH, 2020, 8 (06) : 940 - 953
  • [27] Class-specific quality of service guarantees in multimedia communication networks
    Paschalidis, IC
    AUTOMATICA, 1999, 35 (12) : 1951 - 1968
  • [28] Role of spatial coherence in diffractive optical neural networks
    Filipovich, Matthew J.
    Malyshev, Aleksei
    Lvovsky, A. I.
    OPTICS EXPRESS, 2024, 32 (13): 22986 - 22997
  • [29] Finding Interpretable Class-Specific Patterns through Efficient Neural Search
    Walter, Nils Philipp
    Fischer, Jonas
    Vreeken, Jilles
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 9062 - 9070
  • [30] COMBINING GENERIC AND CLASS-SPECIFIC CODEBOOKS FOR OBJECT CATEGORIZATION AND DETECTION
    Pan, Hong
    Zhu, YaPing
    Xia, LiangZheng
    Nguyen, Truong Q.
    2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2011, : 2264 - 2267