Optimizing quantum convolutional neural network architectures for arbitrary data dimension

Cited by: 0
|
Authors
Lee, Changwon [1 ]
Araujo, Israel F. [1 ]
Kim, Dongha [2 ]
Lee, Junghan [3 ]
Park, Siheon [4 ]
Ryu, Ju-Young [2 ,5 ]
Park, Daniel K. [1 ,6 ]
Affiliations
[1] Yonsei Univ, Dept Stat & Data Sci, Seoul, South Korea
[2] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Daejeon, South Korea
[3] Yonsei Univ, Dept Phys, Seoul, South Korea
[4] Seoul Natl Univ, Dept Phys & Astron, Seoul, South Korea
[5] Norma Inc, Quantum AI Team, Seoul, South Korea
[6] Yonsei Univ, Dept Appl Stat, Seoul, South Korea
Source
FRONTIERS IN PHYSICS | 2025, Vol. 13
Funding
National Research Foundation of Singapore;
Keywords
quantum computing; quantum machine learning; machine learning; quantum circuit; quantum algorithm;
DOI
10.3389/fphy.2025.1529188
Chinese Library Classification
O4 [Physics];
Discipline code
0702;
Abstract
Quantum convolutional neural networks (QCNNs) represent a promising approach in quantum machine learning, opening new directions for both quantum and classical data analysis. The approach is particularly attractive because of its feasibility on near-term devices and because it avoids the barren plateau problem, a fundamental challenge in training quantum neural networks (QNNs). However, a limitation arises when applying QCNNs to classical data. The network architecture is most natural when the number of input qubits is a power of two, since each pooling layer reduces this number by a factor of two. Because the number of input qubits determines the dimension (i.e., the number of features) of the input data that can be processed, this restricts the applicability of QCNN algorithms to real-world data. To address this issue, we propose a QCNN architecture capable of handling arbitrary input data dimensions while optimizing the allocation of quantum resources such as ancillary qubits and quantum gates. This optimization is important not only for minimizing computational resources but also for noisy intermediate-scale quantum (NISQ) computing, where the size of the quantum circuits that can be executed reliably is limited. Through numerical simulations, we benchmarked the classification performance of various QCNN architectures across multiple datasets with arbitrary input data dimensions, including MNIST, Landsat satellite, Fashion-MNIST, and Ionosphere. The results validate that the proposed QCNN architecture achieves excellent classification performance with minimal resource overhead, providing an optimal solution when reliable quantum computation is constrained by noise and imperfections.
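The qubit bookkeeping described in the abstract can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not the authors' construction: the function name pooling_schedule and the ceil-halving rule for odd layer widths are assumptions made for this example. It shows why a power-of-two input is the natural case and how an arbitrary qubit count can still be reduced to a single readout qubit without padding with ancillary qubits.

import math

def pooling_schedule(n_qubits: int) -> list:
    """Number of active qubits after each QCNN pooling layer.

    Assumed rule (for illustration only): when the layer width is odd,
    one qubit passes through unpooled rather than being paired, so no
    ancillary qubits are needed for padding.
    """
    schedule = [n_qubits]
    while n_qubits > 1:
        n_qubits = math.ceil(n_qubits / 2)  # each pooling layer roughly halves the width
        schedule.append(n_qubits)
    return schedule

print(pooling_schedule(8))   # [8, 4, 2, 1] -- power-of-two input, depth log2(8) = 3
print(pooling_schedule(10))  # [10, 5, 3, 2, 1] -- arbitrary dimension, no ancillas
print(2 ** math.ceil(math.log2(10)) - 10)  # 6 -- qubits a naive pad-to-16 design would add

Under this reading, padding a 10-feature input up to the next power of two costs six extra qubits and their associated gates, which is exactly the kind of overhead the proposed architecture aims to avoid on noise-limited NISQ hardware.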
Pages: 13
Related Papers
50 records in total
  • [1] Optimizing Convolutional Neural Network Architectures
    Balderas, Luis
    Lastra, Miguel
    Benitez, Jose M.
    MATHEMATICS, 2024, 12 (19)
  • [2] Should You Go Deeper? Optimizing Convolutional Neural Network Architectures without Training
    Richter, Mats L.
    Schoening, Julius
    Wiedenroth, Anna
    Krumnack, Ulf
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 964 - 971
  • [3] Quantum convolutional neural network for classical data classification
    Hur, Tak
    Kim, Leeseok
    Park, Daniel K.
    QUANTUM MACHINE INTELLIGENCE, 2022, 4 (01)
  • [4] Optimizing Convolutional Neural Network on DSP
    Jagannathan, Shyam
    Mody, Mihir
    Mathew, Manu
    2016 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2016,
  • [5] Bandwidth Efficient Architectures for Convolutional Neural Network
    Wang, Jichen
    Lin, Jun
    Wang, Zhongfeng
    PROCEEDINGS OF THE 2018 IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS), 2018, : 94 - 99
  • [6] A review of convolutional neural network architectures and their optimizations
    Cong, Shuang
    Zhou, Yang
    ARTIFICIAL INTELLIGENCE REVIEW, 2023, 56 (03) : 1905 - 1969
  • [7] Efficient Convolution Architectures for Convolutional Neural Network
    Wang, Jichen
    Lin, Jun
    Wang, Zhongfeng
    2016 8TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS & SIGNAL PROCESSING (WCSP), 2016,
  • [8] Performance Comparison of Data Classification based on Modern Convolutional Neural Network Architectures
    Tan, Yuchen
    Li, Yanxiang
    Liu, Han
    Lu, Wang
    Xiao, Xiaowu
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 815 - 818