Universal adversarial attacks on deep neural networks for medical image classification

Cited by: 74
Authors
Hirano, Hokuto [1 ]
Minagi, Akinori [1 ]
Takemoto, Kazuhiro [1 ]
Affiliation
[1] Kyushu Inst Technol, Dept Biosci & Bioinformat, Iizuka, Fukuoka 8208502, Japan
Keywords
Deep neural networks; Medical imaging; Adversarial attacks; Security and privacy; DISEASES
DOI
10.1186/s12880-020-00530-y
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
Background: Deep neural networks (DNNs) are widely investigated in medical image classification to provide automated support for clinical diagnosis. Because high-stakes decisions are made on the basis of such diagnoses, it is necessary to evaluate the robustness of medical DNNs against adversarial attacks. Several previous studies have considered simple adversarial attacks; however, the vulnerability of DNNs to more realistic and higher-risk attacks, such as the universal adversarial perturbation (UAP), a single perturbation that can induce DNN failure on most inputs, has not yet been evaluated.

Methods: We focus on three representative DNN-based medical image classification tasks (i.e., skin cancer, referable diabetic retinopathy, and pneumonia classification) and investigate the vulnerability of seven model architectures to UAPs.

Results: We demonstrate that DNNs are vulnerable both to nontargeted UAPs, which cause a task failure in which an input is assigned an incorrect class, and to targeted UAPs, which cause the DNN to classify an input into a specific class. Almost imperceptible UAPs achieved success rates above 80% for both nontargeted and targeted attacks. The vulnerability to UAPs depended very little on the model architecture. Moreover, we discovered that adversarial retraining, which is known to be an effective adversarial defense, increased DNNs' robustness against UAPs in only very few cases.

Conclusions: Contrary to previous assumptions, the results indicate that DNN-based clinical diagnosis is easier to deceive with adversarial attacks than expected. Adversaries can cause failed diagnoses at lower cost (e.g., without consideration of the data distribution); moreover, they can control the diagnostic outcome. The effects of adversarial defenses may be limited. Our findings emphasize that more careful consideration is required when developing DNNs for medical imaging and deploying them in practice.
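As context for the abstract: a UAP is optimized once, over many images, and then added unchanged to any input at attack time. The sketch below shows one simple way a targeted UAP could be crafted with projected gradient descent, assuming a PyTorch image classifier; the function name, hyperparameter values, and the 3x224x224 input size are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def targeted_uap(model, loader, target_class, eps=0.04, lr=0.005,
                 epochs=5, device="cpu"):
    """Hypothetical sketch: craft one shared perturbation that pushes
    every input toward target_class (not the paper's actual code)."""
    model.eval().to(device)
    # A single perturbation shared by all inputs; shape assumes 224x224 RGB.
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:  # true labels are not needed for a targeted attack
            x = x.to(device)
            logits = model(torch.clamp(x + delta, 0.0, 1.0))
            target = torch.full((x.size(0),), target_class,
                                dtype=torch.long, device=device)
            # Minimizing this loss drifts predictions toward target_class.
            loss = F.cross_entropy(logits, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                # L-infinity projection keeps the UAP almost imperceptible.
                delta.clamp_(-eps, eps)
    return delta.detach()
```

Under this sketch, the targeted success rate the abstract refers to would be the fraction of perturbed inputs the model assigns to target_class; a nontargeted variant would instead maximize the loss with respect to the true labels.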
Pages: 13
Related papers
50 records in total
  • [32] Reinforced Adversarial Attacks on Deep Neural Networks Using ADMM
    Zhao, Pu
    Xu, Kaidi
    Zhang, Tianyun
    Fardad, Makan
    Wang, Yanzhi
    Lin, Xue
    [J]. 2018 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2018), 2018, : 1169 - 1173
  • [33] Adversarial Attacks on Deep Neural Networks Based Modulation Recognition
    Liu, Mingqian
    Zhang, Zhenju
    Zhao, Nan
    Chen, Yunfei
    [J]. IEEE INFOCOM 2022 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2022,
  • [34] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
    Ozdag, Mesut
    [J]. CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140 : 152 - 161
  • [35] Adversarial Evasion Attacks to Deep Neural Networks in ECR Models
    Nemoto, Shota
    Rajapaksha, Subhash
    Perouli, Despoina
    [J]. HEALTHINF: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES - VOL 5: HEALTHINF, 2021, : 135 - 141
  • [36] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    [J]. PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (02) : 131 - 141
  • [37] Medical Image Synthesis with Deep Convolutional Adversarial Networks
    Nie, Dong
    Trullo, Roger
    Lian, Jun
    Wang, Li
    Petitjean, Caroline
    Ruan, Su
    Wang, Qian
    Shen, Dinggang
    [J]. IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2018, 65 (12) : 2720 - 2730
  • [38] Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses
    Xu, Yonghao
    Du, Bo
    Zhang, Liangpei
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (02) : 1604 - 1617
  • [39] Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification
    Alzaidy, Sharoug
    Binsalleeh, Hamad
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (04):
  • [40] Encoding Generative Adversarial Networks for Defense Against Image Classification Attacks
    Perez-Bravo, Jose M.
    Rodriguez-Rodriguez, Jose A.
    Garcia-Gonzalez, Jorge
    Molina-Cabello, Miguel A.
    Thurnhofer-Hemsi, Karl
    Lopez-Rubio, Ezequiel
    [J]. BIO-INSPIRED SYSTEMS AND APPLICATIONS: FROM ROBOTICS TO AMBIENT INTELLIGENCE, PT II, 2022, 13259 : 163 - 172