Quantum self-supervised learning

Cited: 17
Authors
Jaderberg, B. [1 ]
Anderson, L. W. [1 ]
Xie, W. [2 ]
Albanie, S. [3 ]
Kiffner, M. [1 ,4 ]
Jaksch, D. [1 ,4 ,5 ]
Affiliations
[1] Univ Oxford, Clarendon Lab, Parks Rd, Oxford OX1 3PU, England
[2] Univ Oxford, Dept Engn Sci, Visual Geometry Grp, Oxford, England
[3] Univ Cambridge, Dept Engn, Cambridge, England
[4] Natl Univ Singapore, Ctr Quantum Technol, 3 Sci Dr 2, Singapore 117543, Singapore
[5] Univ Hamburg, Inst Laserphys, D-22761 Hamburg, Germany
Source
QUANTUM SCIENCE AND TECHNOLOGY | 2022, Vol. 7, Issue 3
Funding
UK Engineering and Physical Sciences Research Council; National Research Foundation, Singapore;
Keywords
variational quantum algorithms; quantum machine learning; self-supervised learning; deep learning; quantum neural networks; REPRESENTATION; ALGORITHM;
DOI
10.1088/2058-9565/ac6825
Chinese Library Classification
O4 [Physics];
Discipline Code
0702;
Abstract
The resurgence of self-supervised learning, whereby a deep learning model generates its own supervisory signal from the data, promises a scalable way to tackle the dramatically increasing size of real-world data sets without human annotation. However, the staggering computational complexity of these methods is such that, for state-of-the-art performance, classical hardware requirements represent a significant bottleneck to further progress. Here we take the first steps towards understanding whether quantum neural networks (QNNs) could meet the demand for more powerful architectures, and test their effectiveness in proof-of-principle hybrid experiments. Interestingly, we observe a numerical advantage for the learning of visual representations using small-scale QNNs over equivalently structured classical networks, even when the quantum circuits are sampled with only 100 shots. Furthermore, we apply our best quantum model to classify unseen images on the ibmq_paris quantum computer and find that current noisy devices can already match the accuracy of the equivalent classical model on downstream tasks.
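The two ingredients the abstract combines can be sketched in a few lines: a parameterized quantum circuit whose expectation value is estimated from a finite number of measurement shots (here 100, as in the paper's numerics), and a contrastive self-supervised loss that supplies the supervisory signal without labels. The sketch below is purely illustrative and is not the paper's architecture; the single-qubit circuit, the function names `qnn_features` and `contrastive_loss`, and the scalar representations are simplifying assumptions, simulated with NumPy rather than run on quantum hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

def qnn_features(x, theta, shots=100):
    """Toy one-qubit 'QNN': encode input x and trainable parameter theta
    as Ry rotations on |0>, then estimate <Z> from a finite shot budget."""
    angle = x + theta                       # Ry(x) followed by Ry(theta)
    p0 = np.cos(angle / 2.0) ** 2           # probability of measuring |0>
    samples = rng.random(shots) < p0        # simulate `shots` measurements
    return 2.0 * samples.mean() - 1.0       # shot-noise estimate of <Z>

def contrastive_loss(z1, z2, negatives, tau=0.5):
    """InfoNCE-style loss on scalar representations: pull two views of the
    same image (z1, z2) together, push them away from negative samples."""
    sims = np.array([z1 * z2] + [z1 * n for n in negatives]) / tau
    sims -= sims.max()                      # shift for numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())

# Two "augmented views" of one input, embedded by the shot-sampled circuit.
theta = 0.4
view_a = qnn_features(0.30, theta, shots=100)
view_b = qnn_features(0.32, theta, shots=100)
negatives = [qnn_features(x, theta, shots=100) for x in (1.5, -2.0)]
loss = contrastive_loss(view_a, view_b, negatives)
```

In a hybrid training loop of this kind, `loss` would be minimized over `theta` (e.g. via parameter-shift gradients), with the shot count controlling the trade-off between estimator noise and circuit-evaluation cost.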
Pages: 15