Quantum adversarial machine learning

Cited by: 58
Authors
Lu, Sirui [1,2]
Duan, Lu-Ming [1]
Deng, Dong-Ling [1,3]
Affiliations
[1] Tsinghua Univ, IIIS, Ctr Quantum Informat, Beijing 100084, Peoples R China
[2] Max Planck Inst Quantum Opt, Hans Kopfermann Str 1, D-85748 Garching, Germany
[3] Shanghai Qi Zhi Inst, 41st Floor, AI Tower, 701 Yunjin Rd, Shanghai 200232, Peoples R China
Source
PHYSICAL REVIEW RESEARCH, 2020, Vol. 2, Issue 3
Keywords
NEURAL-NETWORKS; PHASE-TRANSITIONS; GAME; GO
DOI
10.1103/PhysRevResearch.2.033212
Chinese Library Classification: O4 [Physics]
Discipline code: 0702
Abstract
Adversarial machine learning is an emerging field that focuses on studying vulnerabilities of machine learning approaches in adversarial settings and developing techniques accordingly to make learning robust to adversarial manipulations. It plays a vital role in various machine learning applications and recently has attracted tremendous attention across different communities. In this paper, we explore different adversarial scenarios in the context of quantum machine learning. We find that, similar to traditional classifiers based on classical neural networks, quantum learning systems are likewise vulnerable to crafted adversarial examples, independent of whether the input data is classical or quantum. In particular, we find that a quantum classifier that achieves nearly the state-of-the-art accuracy can be conclusively deceived by adversarial examples obtained via adding imperceptible perturbations to the original legitimate samples. This is explicitly demonstrated with quantum adversarial learning in different scenarios, including classifying real-life images (e.g., handwritten digit images in the dataset MNIST), learning phases of matter (such as ferromagnetic/paramagnetic orders and symmetry protected topological phases), and classifying quantum data. Furthermore, we show that based on the information of the adversarial examples at hand, practical defense strategies can be designed to fight against a number of different attacks. Our results uncover the notable vulnerability of quantum machine learning systems to adversarial perturbations, which not only reveals another perspective in bridging machine learning and quantum physics in theory but also provides valuable guidance for practical applications of quantum classifiers based on both near-term and future quantum technologies.
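The attack the abstract describes (adding a small, targeted perturbation to a legitimate sample so that a classifier's loss rises and its prediction flips) can be sketched on a toy classical model. The following is a minimal illustration of the fast gradient sign method on a hypothetical logistic-regression classifier; the weights, sample, and perturbation budget are made up for illustration and are not taken from the paper:

```python
import numpy as np

# Toy linear classifier p(y=1|x) = sigmoid(w.x + b).
# Weights and the sample below are illustrative, not from the paper.
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm(x, y, eps):
    # Fast gradient sign method: step along the sign of the input
    # gradient of the loss; for this model, d(loss)/dx = (p - y) * w.
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.4, -0.2, 0.1])  # legitimate sample, true label y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.5)

# The clean sample is classified correctly; the perturbed one is not.
print(sigmoid(w @ x + b) > 0.5, sigmoid(w @ x_adv + b) > 0.5)  # True False
```

The paper's point is that the same gradient-based recipe carries over to quantum classifiers, for both classical and quantum input data, where the perturbation acts on the encoded input state.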
Pages: 22
Related Papers
50 in total
  • [1] Towards quantum enhanced adversarial robustness in machine learning
    West, Maxwell T.
    Tsang, Shu-Lok
    Low, Jia S.
    Hill, Charles D.
    Leckie, Christopher
    Hollenberg, Lloyd C. L.
    Erfani, Sarah M.
    Usman, Muhammad
    NATURE MACHINE INTELLIGENCE, 2023, 5 (06) : 581 - 589
  • [3] Quantum Adversarial Machine Learning: Status, Challenges and Perspectives
    Edwards, DeMarcus
    Rawat, Danda B.
    2020 SECOND IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2020), 2020, : 128 - 133
  • [4] Robust in practice: Adversarial attacks on quantum machine learning
    Liao, Haoran
    Convy, Ian
    Huggins, William J.
    Whaley, K. Birgitta
    PHYSICAL REVIEW A, 2021, 103 (04)
  • [5] Variational Quantum Generators: Generative Adversarial Quantum Machine Learning for Continuous Distributions
    Romero, Jonathan
    Aspuru-Guzik, Alan
    ADVANCED QUANTUM TECHNOLOGIES, 2021, 4 (01)
  • [6] Adversarial Machine Learning
    Tygar, J. D.
    IEEE INTERNET COMPUTING, 2011, 15 (05) : 4 - 6
  • [7] Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis
    Akter, Mst Shapna
    Shahriar, Hossain
    Iqbal, Iysa
    Hossain, M. D.
    Karim, M. A.
    Clincy, Victor
    Voicu, Razvan
    2023 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE SERVICES ENGINEERING, SSE, 2023, : 222 - 231
  • [8] Adversarial Machine Learning for Text
    Lee, Daniel
    Verma, Rakesh
    PROCEEDINGS OF THE SIXTH INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS (IWSPA'20), 2020, : 33 - 34
  • [9] Machine Learning in Adversarial Settings
    McDaniel, Patrick
    Papernot, Nicolas
    Celik, Z. Berkay
    IEEE SECURITY & PRIVACY, 2016, 14 (03) : 68 - 72
  • [10] Machine learning in adversarial environments
    Laskov, Pavel
    Lippmann, Richard
    MACHINE LEARNING, 2010, 81 (02) : 115 - 119