Optimizing quantum convolutional neural network architectures for arbitrary data dimension

Cited: 0
Authors
Lee, Changwon [1 ]
Araujo, Israel F. [1 ]
Kim, Dongha [2 ]
Lee, Junghan [3 ]
Park, Siheon [4 ]
Ryu, Ju-Young [2 ,5 ]
Park, Daniel K. [1 ,6 ]
Affiliations
[1] Yonsei Univ, Dept Stat & Data Sci, Seoul, South Korea
[2] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Daejeon, South Korea
[3] Yonsei Univ, Dept Phys, Seoul, South Korea
[4] Seoul Natl Univ, Dept Phys & Astron, Seoul, South Korea
[5] Norma Inc, Quantum AI Team, Seoul, South Korea
[6] Yonsei Univ, Dept Appl Stat, Seoul, South Korea
Source
FRONTIERS IN PHYSICS | 2025 / Vol. 13
Funding
National Research Foundation of Singapore;
Keywords
quantum computing; quantum machine learning; machine learning; quantum circuit; quantum algorithm;
DOI
10.3389/fphy.2025.1529188
Chinese Library Classification: O4 [Physics]
Discipline code: 0702
Abstract
Quantum convolutional neural networks (QCNNs) represent a promising approach in quantum machine learning, paving new directions for both quantum and classical data analysis. This approach is particularly attractive due to the absence of the barren plateau problem, a fundamental challenge in training quantum neural networks (QNNs), and due to its practical feasibility. However, a limitation arises when applying QCNNs to classical data. The network architecture is most natural when the number of input qubits is a power of two, as this number is reduced by a factor of two in each pooling layer. The number of input qubits determines the dimension (i.e., the number of features) of the input data that can be processed, restricting the applicability of QCNN algorithms to real-world data. To address this issue, we propose a QCNN architecture capable of handling arbitrary input data dimensions while optimizing the allocation of quantum resources such as ancillary qubits and quantum gates. This optimization is not only important for minimizing computational resources but also essential in noisy intermediate-scale quantum (NISQ) computing, where the size of the quantum circuits that can be executed reliably is limited. Through numerical simulations, we benchmarked the classification performance of various QCNN architectures across multiple datasets with arbitrary input data dimensions, including MNIST, Landsat satellite, Fashion-MNIST, and Ionosphere. The results validate that the proposed QCNN architecture achieves excellent classification performance with minimal resource overhead, providing an optimal solution when reliable quantum computation is constrained by noise and imperfections.
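The halving structure the abstract describes can be illustrated with a short sketch. This is not the paper's method, only an illustration of the qubit schedule: each pooling layer discards roughly half the qubits, so a power-of-two input reduces cleanly while an arbitrary dimension produces uneven intermediate layers. The helper `qcnn_layer_widths` is a hypothetical name introduced here.

```python
from math import ceil

def qcnn_layer_widths(n_qubits: int) -> list[int]:
    """Qubit count after each pooling layer, assuming each pooling
    step keeps ceil(n/2) of the current qubits (an illustrative
    convention, not necessarily the paper's exact scheme)."""
    widths = [n_qubits]
    while widths[-1] > 1:
        widths.append(ceil(widths[-1] / 2))
    return widths

# A power-of-two input collapses cleanly to a single readout qubit.
print(qcnn_layer_widths(8))   # [8, 4, 2, 1]

# An arbitrary dimension still reaches one qubit, but the
# intermediate layers are uneven, which is where resource
# allocation choices (ancillas, extra gates) come into play.
print(qcnn_layer_widths(10))  # [10, 5, 3, 2, 1]
```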
Pages: 13