Universal adversarial attacks on deep neural networks for medical image classification

Cited by: 75
Authors:
Hirano, Hokuto [1 ]
Minagi, Akinori [1 ]
Takemoto, Kazuhiro [1 ]
Affiliations:
[1] Kyushu Inst Technol, Dept Biosci & Bioinformat, Iizuka, Fukuoka 8208502, Japan
Keywords:
Deep neural networks; Medical imaging; Adversarial attacks; Security and privacy; DISEASES
DOI:
10.1186/s12880-020-00530-y
Chinese Library Classification (CLC):
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject classification codes:
1002; 100207; 1009
Abstract:
Background: Deep neural networks (DNNs) are widely investigated for medical image classification to provide automated support for clinical diagnosis. Because high-stakes decisions are made based on the resulting diagnoses, the robustness of medical DNN tasks against adversarial attacks must be evaluated. Several previous studies have considered simple adversarial attacks; however, the vulnerability of DNNs to more realistic and higher-risk attacks, such as universal adversarial perturbation (UAP), a single perturbation that can induce DNN failure on most inputs, has not yet been evaluated.

Methods: We focus on three representative DNN-based medical image classification tasks (i.e., skin cancer, referable diabetic retinopathy, and pneumonia classification) and investigate the vulnerability of seven model architectures to UAPs.

Results: We demonstrate that DNNs are vulnerable both to nontargeted UAPs, which cause task failure by making an input be assigned an incorrect class, and to targeted UAPs, which cause the DNN to classify an input into a specific class. Almost imperceptible UAPs achieved > 80% success rates for both nontargeted and targeted attacks, and the vulnerability to UAPs depended very little on model architecture. Moreover, we discovered that adversarial retraining, which is known to be an effective method for adversarial defense, increased DNNs' robustness against UAPs in only a few cases.

Conclusion: Contrary to previous assumptions, these results indicate that DNN-based clinical diagnosis is easier to deceive with adversarial attacks than expected. Adversaries can cause failed diagnoses at low cost (e.g., without knowledge of the data distribution) and, moreover, can manipulate the diagnosis itself. The effects of adversarial defenses may be limited. Our findings emphasize that more careful consideration is required when developing DNNs for medical imaging and deploying them in practice.
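To make the attack setting concrete, below is a minimal sketch, in PyTorch, of how a precomputed UAP is applied and how the nontargeted and targeted success rates reported in the abstract are commonly measured. This is not the authors' code (nor necessarily their framework): model, images, uap, and target_class are hypothetical placeholders, and computing the UAP itself (an iterative optimization, or the black-box search of related paper [2]) is omitted.

import torch

@torch.no_grad()
def attack_success_rates(model, images, uap, target_class=None):
    """Evaluate one shared perturbation `uap` across a batch of inputs.

    Nontargeted success: the prediction changes from the clean prediction
    (one common definition). Targeted success: the prediction equals
    `target_class`.
    """
    model.eval()
    clean_pred = model(images).argmax(dim=1)
    # A UAP is a single image-sized tensor added to every input;
    # clamping keeps the perturbed images in the valid pixel range.
    adv_images = torch.clamp(images + uap, 0.0, 1.0)
    adv_pred = model(adv_images).argmax(dim=1)
    nontargeted = (adv_pred != clean_pred).float().mean().item()
    targeted = None
    if target_class is not None:
        targeted = (adv_pred == target_class).float().mean().item()
    return nontargeted, targeted

Under the "almost imperceptible" condition (uap constrained to a small L2 or L-infinity norm), a success rate above 80% means this one shared perturbation fools the classifier on more than four out of five test images.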
Pages: 13
Related papers (50 records total):
  • [1] Universal adversarial attacks on deep neural networks for medical image classification
    Hokuto Hirano
    Akinori Minagi
    Kazuhiro Takemoto
    [J]. BMC MEDICAL IMAGING, 2021, 21
  • [2] Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification
    Koga, Kazuki
    Takemoto, Kazuhiro
    [J]. ALGORITHMS, 2022, 15 (05)
  • [3] Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning
    Minagi, Akinori
    Hirano, Hokuto
    Takemoto, Kazuhiro
    [J]. JOURNAL OF IMAGING, 2022, 8 (02)
  • [4] Adversarial Attacks on Deep Neural Networks for Time Series Classification
    Fawaz, Hassan Ismail
    Forestier, Germain
    Weber, Jonathan
    Idoumghar, Lhassane
    Muller, Pierre-Alain
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [5] Adversarial Attacks on Medical Image Classification
    Tsai, Min-Jen
    Lin, Ping-Yi
    Lee, Ming-En
    [J]. CANCERS, 2023, 15 (17)
  • [6] Grasping Adversarial Attacks on Deep Convolutional Neural Networks for Cholangiocarcinoma Classification
    Diyasa, I. Gede Susrama Mas
    Wahid, Radical Rakhman
    Amiruddin, Brilian Putra
    [J]. 2021 INTERNATIONAL CONFERENCE ON E-HEALTH AND BIOENGINEERING (EHB 2021), 9TH EDITION, 2021
  • [7] Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification
    Jiang, Xiaoyu
    Kong, Xiangyin
    Zheng, Junhua
    Ge, Zhiqiang
    Zhang, Xinmin
    Song, Zhihuan
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024
  • [8] Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks
    Smagulova, Kamilya
    Bacha, Lina
    Fouda, Mohammed E.
    Kanj, Rouwaida
    Eltawil, Ahmed
    [J]. ELECTRONICS, 2024, 13 (03)
  • [9] An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks
    Zhao, Pu
    Liu, Sijia
    Wang, Yanzhi
    Lin, Xue
    [J]. PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, : 1065 - 1073
  • [10] Frequency constraint-based adversarial attack on deep neural networks for medical image classification
    Chen, Fang
    Wang, Jian
    Liu, Han
    Kong, Wentao
    Zhao, Zhe
    Ma, Longfei
    Liao, Hongen
    Zhang, Daoqiang
    [J]. COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 164