Stochastic security as a performance metric for quantum-enhanced generative AI
Cited: 0
Authors: Crum, Noah A. [1]; Sunny, Leanto [1]; Ronagh, Pooya [2,3,4,5]; Laflamme, Raymond [2,3,4]; Balu, Radhakrishnan [6,7]; Siopsis, George [1]
Affiliations:
[1] Univ Tennessee, Dept Phys & Astron, Knoxville, TN 37996 USA
[2] Univ Waterloo, Inst Quantum Comp, Waterloo, ON N2L 3G1, Canada
[3] Univ Waterloo, Dept Phys & Astron, Waterloo, ON N2L 3G1, Canada
[4] Perimeter Inst Theoret Phys, Waterloo, ON N2L 2Y5, Canada
[5] 1QB Informat Technol 1QBit, Vancouver, BC V6E 4B1, Canada
[6] Army Res Lab, Comp & Informat Sci Directorate, Adelphi, MD 21005 USA
[7] Univ Maryland, Dept Math, College Pk, MD 20742 USA
Keywords:
Generative modeling;
Energy-based models;
Adversarial attacks;
Stochastic security;
Quantum Gibbs sampling;
Diffusion processes;
Stochastic gradient Langevin dynamics;
Learning algorithm
DOI: 10.1007/s42484-025-00256-6
Chinese Library Classification: TP18 [Artificial intelligence theory]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract:
Motivated by applications of quantum computers in Gibbs sampling from continuous real-valued functions, we ask whether such algorithms can provide practical advantages for machine learning models trained on classical data and seek measures for quantifying such impacts. In this study, we focus on deep energy-based models (EBMs), as they require continuous-domain Gibbs sampling both during training and inference. In lieu of fault-tolerant quantum computers that can execute quantum Gibbs sampling algorithms, we use Monte Carlo simulation of diffusion processes as a classical alternative. More specifically, we investigate whether long-run persistent-chain Monte Carlo simulation of Langevin dynamics improves the quality of the representations achieved by EBMs. We consider a scheme in which the Monte Carlo simulation of a diffusion, whose drift is given by the gradient of the energy function, is used to improve the adversarial robustness and calibration score of an independent classifier network. Our results show that increasing the computational budget of Gibbs sampling in persistent contrastive divergence improves both the calibration and adversarial robustness of the model, suggesting a prospective avenue of quantum advantage for generative AI using future large-scale quantum computers.
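For readers unfamiliar with the training loop the abstract refers to, the following is a minimal, illustrative PyTorch sketch of persistent contrastive divergence with a Langevin sampler whose drift is the gradient of the energy function. It is not the authors' implementation; the names `energy_fn`, `langevin_sample`, and `pcd_step` and all hyperparameters are hypothetical placeholders.

```python
import torch

def langevin_sample(energy_fn, x, n_steps=100, step_size=1e-2):
    # Euler-Maruyama simulation of the overdamped Langevin diffusion
    #   dX_t = -grad E(X_t) dt + sqrt(2) dW_t,
    # whose stationary law is the Gibbs distribution p(x) ~ exp(-E(x)).
    x = x.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        grad = torch.autograd.grad(energy_fn(x).sum(), x)[0]
        x = (x.detach()
             - step_size * grad
             + (2.0 * step_size) ** 0.5 * torch.randn_like(x))
    return x.detach()

def pcd_step(energy_fn, optimizer, data_batch, chains, n_steps=100):
    # One persistent contrastive divergence (PCD) update: the chains
    # are carried across updates rather than reinitialized, so a larger
    # n_steps corresponds to a larger Gibbs-sampling budget.
    samples = langevin_sample(energy_fn, chains, n_steps)
    # Maximum-likelihood surrogate: push energy down on data,
    # up on model samples.
    loss = energy_fn(data_batch).mean() - energy_fn(samples).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return samples  # pass back in as `chains` on the next call
```

In this sketch, increasing `n_steps` plays the role of the larger Gibbs-sampling computational budget whose effect on calibration and adversarial robustness the paper measures.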
Pages: 13