Coherent feed-forward quantum neural network

Cited by: 0
Authors
Singh, Utkarsh [1 ,2 ]
Goldberg, Aaron Z. [1 ,2 ]
Heshami, Khabat [1 ,2 ,3 ]
Affiliations
[1] National Research Council of Canada, 100 Sussex Drive, Ottawa, ON K1N 5A2, Canada
[2] Department of Physics, University of Ottawa, 25 Templeton Street, Ottawa, ON K1N 6N5, Canada
[3] Institute for Quantum Science and Technology, Department of Physics and Astronomy, University of Calgary, Calgary, AB T2N 1N4, Canada
Keywords
Qubits
DOI
10.1007/s42484-024-00222-8
Abstract
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study. Current QNN models primarily employ variational circuits on an ansatz or a quantum feature map, often requiring multiple entanglement layers. This methodology not only increases the computational cost of the circuit beyond what is practical on near-term quantum devices but also misleadingly labels these models as neural networks, given their divergence from the structure of a typical feed-forward neural network (FFNN). Moreover, the circuit depth and qubit requirements of these models scale poorly with the number of data features, posing an efficiency challenge for real-world machine learning tasks. We introduce a bona fide QNN model that matches the versatility of a traditional FFNN through adaptable intermediate layers and nodes, while avoiding intermediate measurements so that the entire model remains coherent. This model stands out with its reduced circuit depth and number of requisite CNOT gates, achieving a more than 50% reduction in both compared to prevailing QNN models. Furthermore, the qubit count in our model is independent of the number of data features. We test our proposed model on benchmark datasets such as the breast cancer diagnostic (Wisconsin) and credit card fraud detection datasets. Our model achieved an accuracy of 91% on the breast cancer dataset and 85% on the credit card fraud detection dataset, outperforming existing QNN methods by 5–10% while requiring approximately 50% fewer quantum resources. These results showcase the efficacy of our approach, paving the way for the application of quantum neural networks to relevant real-world machine learning problems. © Crown 2024.
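The abstract states that the qubit count of the model is independent of the number of data features, but the record does not reproduce the circuit construction. The block below is a minimal, hypothetical PennyLane sketch of one generic way such a property can arise, namely re-uploading the feature vector as rotation angles onto a fixed two-qubit register across layers; the layer structure, gate choices, and single-CNOT entangler per layer are illustrative assumptions and are not taken from the paper's coherent FFNN construction.

```python
# Hypothetical sketch (assumption, not the authors' construction): a fixed-width
# variational circuit that encodes an arbitrary number of classical features by
# re-uploading them as rotation angles, so the qubit count stays constant while
# only the circuit depth grows with the feature count.
import numpy as np
import pennylane as qml

n_qubits = 2  # fixed register size, independent of how many features arrive
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(features, weights):
    """weights has shape (n_layers, n_qubits, 3); features is a flat vector."""
    for layer, layer_weights in enumerate(weights):
        # Data-encoding step: cycle the features onto the fixed qubit register.
        for i in range(n_qubits):
            qml.RY(features[(layer * n_qubits + i) % len(features)], wires=i)
        # Trainable step: one general single-qubit rotation per wire.
        for i in range(n_qubits):
            qml.Rot(*layer_weights[i], wires=i)
        # A single entangling gate per layer keeps the CNOT count low.
        qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

features = np.array([0.1, 0.7, 0.3, 0.9])                     # 4 features, still 2 qubits
weights = np.random.uniform(0, np.pi, size=(3, n_qubits, 3))  # 3 layers of parameters
print(qnn(features, weights))                                  # expectation value in [-1, 1]
```

Adding more features only lengthens the encoding loop; the wire count never changes, which mirrors the fixed-qubit scaling behaviour the abstract emphasizes.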
Related papers (50 in total)
  • [1] Quantum implementation of an artificial feed-forward neural network
    Tacchino, Francesco
    Barkoutsos, Panagiotis
    Macchiavello, Chiara
    Tavernelli, Ivano
    Gerace, Dario
    Bajoni, Daniele
    [J]. QUANTUM SCIENCE AND TECHNOLOGY, 2020, 5 (04)
  • [2] Design of an Interval Feed-Forward Neural Network
    Srivastava, Smriti
    Singh, Madhusudan
    [J]. PROCEEDINGS OF THE 2012 FIFTH INTERNATIONAL CONFERENCE ON EMERGING TRENDS IN ENGINEERING AND TECHNOLOGY (ICETET 2012), 2012, : 211 - 215
  • [3] On the feed-forward neural network for analyzing pantograph equations
    Az-Zo'bi, Emad A.
    Shah, Rasool
    Alyousef, Haifa A.
    Tiofack, C. G. L.
    El-Tantawy, S. A.
    [J]. AIP ADVANCES, 2024, 14 (02)
  • [4] An experiment with feed-forward neural network for speech recognition
    Jelinek, B
    Juhar, J
    Cizmar, A
    [J]. STATE OF THE ART IN COMPUTATIONAL INTELLIGENCE, 2000, : 308 - 313
  • [5] An incremental learning preprocessor for feed-forward neural network
    Fuangkhon, Piyabute
    [J]. ARTIFICIAL INTELLIGENCE REVIEW, 2014, 41 (02) : 183 - 210
  • [6] Enhancing Feed-Forward Neural Network in Image Classification
    Daday, Mark Jovic A.
    Fajardo, Arnel C.
    Medina, Ruji P.
    [J]. 2019 2ND INTERNATIONAL CONFERENCE ON COMPUTING AND BIG DATA (ICCBD 2019), 2019, : 86 - 90
  • [7] Finding an Optimal Configuration of the Feed-forward Neural Network
    Strba, Radoslav
    Stolfa, Jakub
    Stolfa, Svatopluk
    [J]. INFORMATION MODELLING AND KNOWLEDGE BASES XXVII, 2016, 280 : 199 - 206
  • [8] Response analysis of feed-forward neural network predictors
    Varone, B
    Tanskanen, JMA
    Ovaska, SJ
    [J]. 1997 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP-97), 1997 : 3309 - 3312
  • [9] A Feed-Forward Neural Network for Solving Stokes Problem
    Baymani, M.
    Effati, S.
    Kerayechian, A.
    [J]. ACTA APPLICANDAE MATHEMATICAE, 2011, 116