Understanding adversarial attacks on deep learning based medical image analysis systems

Cited by: 220
|
Authors
Ma, Xingjun [2 ]
Niu, Yuhao [1 ,3 ]
Gu, Lin [4 ]
Yisen, Wang [5 ]
Zhao, Yitian [6 ]
Bailey, James [2 ]
Lu, Feng [1 ,3 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, State Key Lab VR Technol & Syst, Beijing, Peoples R China
[2] Univ Melbourne, Sch Comp & Informat Syst, Parkville, Vic 3010, Australia
[3] Beihang Univ, Beijing Adv Innovat Ctr Big Data Based Precis Med, Beijing, Peoples R China
[4] Natl Inst Informat, Tokyo 1018430, Japan
[5] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
[6] Chinese Acad Sci, Ningbo Inst Ind Technol, Cixi Institute Biomed Engn, Ningbo, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial attack; Adversarial example detection; Medical image analysis; Deep learning; Robustness;
DOI
10.1016/j.patcog.2020.107332
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) have become popular for medical image analysis tasks such as cancer diagnosis and lesion detection. However, a recent study demonstrates that medical deep learning systems can be compromised by carefully engineered adversarial examples/attacks with small, imperceptible perturbations. This raises safety concerns about the deployment of these systems in clinical settings. In this paper, we provide a deeper understanding of adversarial examples in the context of medical images. We find that medical DNN models can be more vulnerable to adversarial attacks than models for natural images, from two different viewpoints. Surprisingly, we also find that medical adversarial attacks can be easily detected, i.e., simple detectors can achieve over 98% detection AUC against state-of-the-art attacks, due to fundamental feature differences compared to normal examples. We believe these findings may be a useful basis for designing more explainable and secure medical deep learning systems. (c) 2020 Elsevier Ltd. All rights reserved.
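To illustrate the kind of attack the abstract refers to, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard gradient-based attack in this literature, run against a toy logistic-regression "model" in plain NumPy. The weights, data, and the `fgsm_perturb` helper are illustrative assumptions, not taken from the paper, which attacks deep image classifiers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: move x a step of size eps in the sign of the loss gradient.

    For binary cross-entropy with a logistic model p = sigmoid(w.x + b),
    the gradient of the loss w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(w @ x + b)       # predicted probability of class 1
    grad_x = (p - y) * w         # d(loss)/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # toy "model" weights
b = 0.0
x = rng.normal(size=16)          # toy clean input
y = 1.0                          # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)

# The perturbation is bounded (max-norm eps) yet pushes the input in the
# loss-increasing direction, so confidence in the true class drops.
print(sigmoid(w @ x + b), "->", sigmoid(w @ x_adv + b))
```

The perturbation is bounded in max-norm by `eps`, mirroring the "small imperceptible perturbations" described above; on images, the same step is applied per pixel.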
Pages: 11
Related Papers
50 records in total
  • [1] MITIGATING ADVERSARIAL ATTACKS ON MEDICAL IMAGE UNDERSTANDING SYSTEMS
    Paul, Rahul
    Schabath, Matthew
    Gillies, Robert
    Hall, Lawrence
    Goldgof, Dmitry
    [J]. 2020 IEEE 17TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2020), 2020, : 1517 - 1521
  • [2] Adversarial examples: attacks and defences on medical deep learning systems
    Puttagunta, Murali Krishna
    Ravi, S.
    Babu, C. Nelson Kennedy
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (22) : 33773 - 33809
  • [3] Adversarial attacks and adversarial training for burn image segmentation based on deep learning
    Chen, Luying
    Liang, Jiakai
    Wang, Chao
    Yue, Keqiang
    Li, Wenjun
    Fu, Zhihui
    [J]. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024, 62 (09) : 2717 - 2735
  • [4] Mitigating adversarial evasion attacks by deep active learning for medical image classification
    Ahmed, Usman
    Lin, Jerry Chun-Wei
    Srivastava, Gautam
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (29) : 41899 - 41910
  • [5] Understanding adversarial attacks on observations in deep reinforcement learning
    You, Qiaoben
    Ying, Chengyang
    Zhou, Xinning
    Su, Hang
    Zhu, Jun
    Zhang, Bo
    [J]. SCIENCE CHINA-INFORMATION SCIENCES, 2024, 67 (05)
  • [6] Malicious Adversarial Attacks on Medical Image Analysis
    Winter, Thomas C.
    [J]. AMERICAN JOURNAL OF ROENTGENOLOGY, 2020, 215 (05) : W55 - W55
  • [7] A Survey on Adversarial Deep Learning Robustness in Medical Image Analysis
    Apostolidis, Kyriakos D.
    Papakostas, George A.
    [J]. ELECTRONICS, 2021, 10 (17)