Optimizing quantum convolutional neural network architectures for arbitrary data dimension

Cited: 0
Authors
Lee, Changwon [1 ]
Araujo, Israel F. [1 ]
Kim, Dongha [2 ]
Lee, Junghan [3 ]
Park, Siheon [4 ]
Ryu, Ju-Young [2 ,5 ]
Park, Daniel K. [1 ,6 ]
Affiliations
[1] Yonsei Univ, Dept Stat & Data Sci, Seoul, South Korea
[2] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Daejeon, South Korea
[3] Yonsei Univ, Dept Phys, Seoul, South Korea
[4] Seoul Natl Univ, Dept Phys & Astron, Seoul, South Korea
[5] Norma Inc, Quantum AI Team, Seoul, South Korea
[6] Yonsei Univ, Dept Appl Stat, Seoul, South Korea
Source
FRONTIERS IN PHYSICS | 2025, Vol. 13
Funding
National Research Foundation of Singapore
Keywords
quantum computing; quantum machine learning; machine learning; quantum circuit; quantum algorithm
DOI
10.3389/fphy.2025.1529188
CLC Number
O4 [Physics]
Subject Classification Code
0702
Abstract
Quantum convolutional neural networks (QCNNs) represent a promising approach in quantum machine learning, paving new directions for both quantum and classical data analysis. This approach is particularly attractive due to the absence of the barren plateau problem, a fundamental challenge in training quantum neural networks (QNNs), and its feasibility on near-term hardware. However, a limitation arises when applying QCNNs to classical data. The network architecture is most natural when the number of input qubits is a power of two, as this number is reduced by a factor of two in each pooling layer. The number of input qubits determines the dimensions (i.e., the number of features) of the input data that can be processed, restricting the applicability of QCNN algorithms to real-world data. To address this issue, we propose a QCNN architecture capable of handling arbitrary input data dimensions while optimizing the allocation of quantum resources such as ancillary qubits and quantum gates. This optimization is not only important for minimizing computational resources, but also essential in noisy intermediate-scale quantum (NISQ) computing, as the size of the quantum circuits that can be executed reliably is limited. Through numerical simulations, we benchmarked the classification performance of various QCNN architectures across multiple datasets with arbitrary input data dimensions, including MNIST, Landsat satellite, Fashion-MNIST, and Ionosphere. The results validate that the proposed QCNN architecture achieves excellent classification performance while incurring minimal resource overhead, providing an optimal solution when reliable quantum computation is constrained by noise and imperfections.
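The power-of-two constraint described in the abstract can be illustrated with a minimal, hypothetical sketch in plain Python (no quantum SDK). It assumes an amplitude-style encoding that packs d features into ceil(log2 d) input qubits and a pooling step that roughly halves the active qubits; the helper name qcnn_layer_widths and the rounding rule are illustrative assumptions, not the paper's actual construction.

import math

def qcnn_layer_widths(num_features: int) -> list[int]:
    """Illustrative only: active-qubit count after each pooling layer,
    assuming ceil(log2(num_features)) input qubits (amplitude-style
    encoding) and a pooling step that halves the register, rounding up.
    This is not the paper's optimized construction."""
    n_qubits = max(1, math.ceil(math.log2(num_features)))
    widths = [n_qubits]
    while widths[-1] > 1:
        widths.append(math.ceil(widths[-1] / 2))
    return widths

# A power-of-two register pools down cleanly ...
print(qcnn_layer_widths(256))  # 256 features -> [8, 4, 2, 1]
# ... while arbitrary dimensions such as Ionosphere (34 features) or
# MNIST (784 pixels) give uneven schedules that naive padding handles
# only at the cost of extra qubits and gates.
print(qcnn_layer_widths(34))   # -> [6, 3, 2, 1]
print(qcnn_layer_widths(784))  # -> [10, 5, 3, 2, 1]

Under these assumptions, any feature count whose qubit register is not a power of two leaves pooling layers that cannot halve exactly, which is the resource-allocation problem the proposed architecture addresses.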
Pages: 13