Training deep quantum neural networks

Cited by: 373
Authors
Beer, Kerstin [1]
Bondarenko, Dmytro [1]
Farrelly, Terry [1,2]
Osborne, Tobias J. [1]
Salzmann, Robert [1,3]
Scheiermann, Daniel [1]
Wolf, Ramona [1]
Affiliations
[1] Leibniz Univ Hannover, Inst Theoret Phys, Appelstr 2, D-30167 Hannover, Germany
[2] Univ Queensland, Sch Math & Phys, ARC Ctr Engn Quantum Syst, Brisbane, Qld 4072, Australia
[3] Univ Cambridge, Dept Appl Math & Theoret Phys, Cambridge CB3 0WA, England
Funding
Australian Research Council;
Keywords
PERCEPTRON;
DOI
10.1038/s41467-020-14454-2
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Neural networks enjoy widespread success in both research and industry and, with the advent of quantum technology, it is a crucial challenge to design quantum neural networks for fully quantum learning tasks. Here we propose a truly quantum analogue of classical neurons, which form quantum feedforward neural networks capable of universal quantum computation. We describe the efficient training of these networks using the fidelity as a cost function, providing both classical and efficient quantum implementations. Our method allows for fast optimisation with reduced memory requirements: the number of qudits required scales with only the width, allowing deep-network optimisation. We benchmark our proposal for the quantum task of learning an unknown unitary and find remarkable generalisation behaviour and a striking robustness to noisy training data.
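For orientation, the fidelity-based cost function the abstract refers to can be sketched as the average fidelity between the network's actual outputs and the desired output states over the N training pairs; the notation below is introduced here for illustration and is not quoted from the record:

C = \frac{1}{N} \sum_{x=1}^{N} \langle \phi_x^{\mathrm{out}} | \, \rho_x^{\mathrm{out}} \, | \phi_x^{\mathrm{out}} \rangle

Here |\phi_x^{\mathrm{out}}\rangle is the target state for training pair x and \rho_x^{\mathrm{out}} is the (generally mixed) state the network produces from the corresponding input |\phi_x^{\mathrm{in}}\rangle; C reaches its maximum value of 1 exactly when every output matches its target.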
Pages: 6
Related Papers
50 records in total
  • [41] On training deep neural networks using a streaming approach
    Duda, Piotr
    Jaworski, Maciej
    Cader, Andrzej
    Wang, Lipo
    JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH, 2020, 10 (01) : 15 - 26
  • [42] Sequence training and adaptation of highway deep neural networks
    Lu, Liang
    2016 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2016), 2016, : 461 - 466
  • [43] Partial data permutation for training deep neural networks
    Cong, Guojing
    Zhang, Li
    Yang, Chih-Chieh
    2020 20TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING (CCGRID 2020), 2020, : 728 - 735
  • [44] Disentangling feature and lazy training in deep neural networks
    Geiger, Mario
    Spigler, Stefano
    Jacot, Arthur
    Wyart, Matthieu
    JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT, 2020, 2020 (11):
  • [45] Training Deep Spiking Neural Networks Using Backpropagation
    Lee, Jun Haeng
    Delbruck, Tobi
    Pfeiffer, Michael
    FRONTIERS IN NEUROSCIENCE, 2016, 10
  • [46] Decentralized trustless gossip training of deep neural networks
    Sajina, Robert
    Tankovic, Nikola
    Etinger, Darko
    2020 43RD INTERNATIONAL CONVENTION ON INFORMATION, COMMUNICATION AND ELECTRONIC TECHNOLOGY (MIPRO 2020), 2020, : 1080 - 1084
  • [47] Accelerating Training for Distributed Deep Neural Networks in MapReduce
    Xu, Jie
    Wang, Jingyu
    Qi, Qi
    Sun, Haifeng
    Liao, Jianxin
    WEB SERVICES - ICWS 2018, 2018, 10966 : 181 - 195
  • [48] Partial Differential Equations for Training Deep Neural Networks
    Chaudhari, Pratik
    Oberman, Adam
    Osher, Stanley
    Soatto, Stefano
    Carlier, Guillaume
    2017 FIFTY-FIRST ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2017, : 1627 - 1631
  • [49] Relating Information Complexity and Training in Deep Neural Networks
    Gain, Alex
    Siegelmann, Hava
    MICRO- AND NANOTECHNOLOGY SENSORS, SYSTEMS, AND APPLICATIONS XI, 2019, 10982
  • [50] AutoAssist: A Framework to Accelerate Training of Deep Neural Networks
    Zhang, Jiong
    Yu, Hsiang-Fu
    Dhillon, Inderjit S.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32