Optimizing quantum convolutional neural network architectures for arbitrary data dimension

Cited by: 0
Authors
Lee, Changwon [1 ]
Araujo, Israel F. [1 ]
Kim, Dongha [2 ]
Lee, Junghan [3 ]
Park, Siheon [4 ]
Ryu, Ju-Young [2 ,5 ]
Park, Daniel K. [1 ,6 ]
Affiliations
[1] Yonsei Univ, Dept Stat & Data Sci, Seoul, South Korea
[2] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Daejeon, South Korea
[3] Yonsei Univ, Dept Phys, Seoul, South Korea
[4] Seoul Natl Univ, Dept Phys & Astron, Seoul, South Korea
[5] Norma Inc, Quantum AI Team, Seoul, South Korea
[6] Yonsei Univ, Dept Appl Stat, Seoul, South Korea
Source
FRONTIERS IN PHYSICS | 2025, Vol. 13
Funding
National Research Foundation of Singapore;
Keywords
quantum computing; quantum machine learning; machine learning; quantum circuit; quantum algorithm;
DOI
10.3389/fphy.2025.1529188
Chinese Library Classification
O4 [Physics];
Discipline classification code
0702;
Abstract
Quantum convolutional neural networks (QCNNs) represent a promising approach in quantum machine learning, paving new directions for both quantum and classical data analysis. This approach is particularly attractive because it avoids the barren plateau problem, a fundamental challenge in training quantum neural networks (QNNs), and because of its practical feasibility. However, a limitation arises when applying QCNNs to classical data. The network architecture is most natural when the number of input qubits is a power of two, since this number is halved in each pooling layer. The number of input qubits determines the dimension (i.e., the number of features) of the input data that can be processed, restricting the applicability of QCNN algorithms to real-world data. To address this issue, we propose a QCNN architecture capable of handling arbitrary input data dimensions while optimizing the allocation of quantum resources such as ancillary qubits and quantum gates. This optimization is not only important for minimizing computational resources but also essential in noisy intermediate-scale quantum (NISQ) computing, where the size of the quantum circuits that can be executed reliably is limited. Through numerical simulations, we benchmarked the classification performance of various QCNN architectures across multiple datasets with arbitrary input data dimensions, including MNIST, Landsat satellite, Fashion-MNIST, and Ionosphere. The results validate that the proposed QCNN architecture achieves excellent classification performance with minimal resource overhead, providing an optimal solution when reliable quantum computation is constrained by noise and imperfections.
Pages: 13
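
The abstract's central structural point is that each pooling layer halves the number of active qubits, which is why a power-of-two input register is the natural fit and why arbitrary feature dimensions require extra care. The sketch below illustrates that halving pattern for a non-power-of-two qubit count. It is a minimal, hypothetical PennyLane example, not the architecture proposed in the paper: the angle-encoding step, the adjacent-pair convolution blocks, the carry-forward rule for an unpaired qubit (so ceil(n/2) qubits survive each pooling step), and the parameter shapes are all assumptions made purely for illustration.

# Hypothetical QCNN-style circuit for an arbitrary qubit count (illustration only).
import pennylane as qml
import numpy as np

n_qubits = 10  # arbitrary input dimension: not a power of two
dev = qml.device("default.qubit", wires=n_qubits)

def pair_counts(n):
    # Number of two-qubit convolution blocks per layer until one qubit remains.
    counts = []
    while n > 1:
        counts.append(n // 2)   # one block per adjacent pair of active qubits
        n = (n + 1) // 2        # assumption: ceil(n/2) qubits survive each pooling step
    return counts

def conv_layer(wires, params):
    # Parameterized two-qubit "convolution" on adjacent pairs of active wires.
    for i, (a, b) in enumerate(zip(wires[0::2], wires[1::2])):
        qml.RY(params[i, 0], wires=a)
        qml.RY(params[i, 1], wires=b)
        qml.CNOT(wires=[a, b])

def pool_layer(wires):
    # Pooling: entangle each pair, then keep only the second qubit of the pair.
    kept = []
    for a, b in zip(wires[0::2], wires[1::2]):
        qml.CNOT(wires=[a, b])
        kept.append(b)
    if len(wires) % 2 == 1:     # odd count: carry the unpaired qubit forward unchanged
        kept.append(wires[-1])
    return kept

@qml.qnode(dev)
def qcnn(features, params):
    qml.AngleEmbedding(features, wires=range(n_qubits), rotation="Y")
    wires = list(range(n_qubits))
    for layer_params in params:  # alternate convolution and pooling until one qubit is left
        conv_layer(wires, layer_params)
        wires = pool_layer(wires)
    return qml.expval(qml.PauliZ(wires[0]))

params = [np.random.uniform(0, np.pi, size=(c, 2)) for c in pair_counts(n_qubits)]
features = np.random.uniform(0, np.pi, size=n_qubits)
print(qcnn(features, params))    # scalar expectation value used as the classifier output

With ten input qubits the active-register sizes per layer are 10, 5, 3, 2, 1; the carry-forward rule shown here is only one possible way to handle the odd-sized layers that the paper's resource-optimized construction addresses.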