Exponential concentration in quantum kernel methods

Cited by: 14
Authors
Thanasilp, Supanut [1 ,2 ,3 ]
Wang, Samson [4 ]
Cerezo, M. [5 ,6 ]
Holmes, Zoe [2 ,5 ]
Affiliations
[1] Natl Univ Singapore, Ctr Quantum Technol, 3 Sci Dr 2, Singapore, Singapore
[2] Ecole Polytech Fed Lausanne EPFL, Inst Phys, Lausanne, Switzerland
[3] Chulalongkorn Univ, Fac Sci, Dept Phys, Chula Intelligent & Complex Syst, Bangkok, Thailand
[4] Imperial Coll London, London, England
[5] Los Alamos Natl Lab, Informat Sci, Los Alamos, NM 87545 USA
[6] Quantum Sci Ctr, Oak Ridge, TN USA
Funding
National Research Foundation, Singapore
DOI
10.1038/s41467-024-49287-w
CLC classification
O [Mathematical sciences and chemistry]; P [Astronomy and Earth sciences]; Q [Biological sciences]; N [General natural sciences]
Discipline codes
07; 0710; 09
Abstract
Kernel methods in Quantum Machine Learning (QML) have recently gained significant attention as a potential candidate for achieving a quantum advantage in data analysis. Among other attractive properties, when training a kernel-based model one is guaranteed to find the model's optimal parameters due to the convexity of the training landscape. However, this guarantee rests on the assumption that the quantum kernel can be efficiently obtained from quantum hardware. In this work we study the performance of quantum kernel models from the perspective of the resources needed to accurately estimate kernel values. We show that, under certain conditions, the values of quantum kernels over different input data can be exponentially concentrated (in the number of qubits) towards some fixed value. Consequently, when training with only a polynomial number of measurements, one ends up with a trivial model whose predictions on unseen inputs are independent of the input data. We identify four sources that can lead to such concentration: the expressivity of the data embedding, global measurements, entanglement, and noise. For each source, we analytically derive an associated concentration bound on quantum kernels. Lastly, we show that, when dealing with classical data, training a parametrized data embedding with a kernel-alignment method is also susceptible to exponential concentration. Our results are verified through numerical simulations for several QML tasks. Altogether, we provide guidelines indicating which features should be avoided to ensure the efficient evaluation of quantum kernels, and thus the performance of quantum kernel methods.

Quantum kernel methods are usually believed to enjoy better trainability than quantum neural networks, which may suffer from well-studied barren plateaus. Here, building on previous evidence, the authors show that a practical implication of exponential concentration is a trivial, data-insensitive model after training, and they identify commonly used features that induce such concentration.
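The central phenomenon is easy to illustrate numerically. The minimal sketch below is not the authors' code; it makes the illustrative assumption that a highly expressive data embedding produces effectively Haar-random states, and shows that the fidelity kernel k(x, x') = |⟨ψ(x)|ψ(x')⟩|² then concentrates around 1/2^n, with fluctuations shrinking exponentially in the number of qubits n (function names and sample sizes are arbitrary choices for the demo):

```python
# Sketch: exponential concentration of fidelity quantum kernel values.
# Haar-random state vectors stand in for the embedded states psi(x) of a
# highly expressive embedding; for such states E[k] = 1/2^n.
import numpy as np

rng = np.random.default_rng(0)

def haar_random_state(dim, rng):
    """Sample a Haar-random pure state as a normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def kernel_samples(n_qubits, n_pairs, rng):
    """Fidelity kernel values |<psi|phi>|^2 between independent random states."""
    dim = 2 ** n_qubits
    return np.array([
        abs(np.vdot(haar_random_state(dim, rng), haar_random_state(dim, rng))) ** 2
        for _ in range(n_pairs)
    ])

for n in range(2, 11, 2):
    k = kernel_samples(n, n_pairs=500, rng=rng)
    print(f"n={n:2d}  mean={k.mean():.2e} (1/2^n={2**-n:.2e})  std={k.std():.2e}")
```

Both the mean and the spread of the sampled kernel values shrink as roughly 2^(-n), so resolving the differences between kernel entries above measurement shot noise would require exponentially many circuit repetitions; this is the practical obstruction the paper formalizes.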
Pages: 13