Adversarial Attacks on Medical Image Classification

Cited by: 4
Authors
Tsai, Min-Jen [1 ]
Lin, Ping-Yi [1 ]
Lee, Ming-En [1 ]
Affiliations
[1] Natl Yang Ming Chiao Tung Univ, Inst Informat Management, Hsinchu 300, Taiwan
Keywords
machine learning; artificial intelligence; adversarial learning; computer vision; metaheuristic
DOI
10.3390/cancers15174228
Chinese Library Classification
R73 [Oncology]
Subject Classification Code
100214
Abstract
Simple Summary: As we increasingly rely on advanced imaging for medical diagnosis, it is vital that our computer programs interpret these images accurately. Even a single mistaken pixel can lead to wrong predictions, potentially causing incorrect medical decisions. This study looks into how such tiny mistakes can trick advanced algorithms. By changing just one or a few pixels in medical images, we tested how various computer models handled these changes. The findings showed that even small disruptions made it hard for the models to interpret the images correctly. This raises concerns about the reliability of current computer-aided diagnostic tools and underscores the need for models that can resist such small disturbances.
Abstract: Due to the growing number of medical images produced by diverse radiological imaging techniques, radiography examinations with computer-aided diagnosis could greatly assist clinical applications. However, an inaccuracy of even a single pixel can lead to incorrect predictions on medical images, and misclassification may lead to the wrong clinical decision. This scenario is analogous to adversarial attacks on deep learning models. Therefore, this study investigates one-pixel and multi-pixel attacks on Deep Neural Network (DNN) models trained on various medical image datasets. Common multiclass and multi-label datasets are examined for one-pixel attacks. Moreover, different experiments are conducted to determine how varying the number of perturbed pixels affects the classification performance and robustness of diverse DNN models. The experimental results show that the medical images rarely survived the pixel attacks, raising concerns about the accuracy of medical image classification and highlighting the importance of a model's ability to resist these attacks for computer-aided diagnosis.
Pages: 22
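The one-pixel attack discussed in the abstract is typically carried out as a metaheuristic search over a single pixel's position and colour. The sketch below is a minimal illustration under assumed conventions (a Keras-style classifier `model`, an HxWx3 image normalized to [0, 1], and SciPy's differential evolution as the metaheuristic); it is not the authors' implementation, and the function and parameter names are placeholders.

```python
# Minimal one-pixel attack sketch. Assumptions (not from the paper):
# a Keras-style classifier `model` and an HxWx3 image in [0, 1].
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, true_label, max_iter=30):
    """Search for one (x, y, r, g, b) perturbation that minimizes the
    model's confidence in the true class."""
    h, w, c = image.shape

    def apply_pixel(params, img):
        # First two parameters are the pixel coordinates, the rest are
        # the new channel intensities.
        x, y, *rgb = params
        adv = img.copy()
        adv[int(x), int(y), :] = np.clip(rgb, 0.0, 1.0)
        return adv

    def true_class_confidence(params):
        # Lower confidence in the true class is better for the attacker.
        adv = apply_pixel(params, image)
        probs = model.predict(adv[np.newaxis, ...], verbose=0)[0]
        return float(probs[true_label])

    # Search bounds: pixel coordinates plus one intensity per channel.
    bounds = [(0, h - 1), (0, w - 1)] + [(0.0, 1.0)] * c
    result = differential_evolution(true_class_confidence, bounds,
                                    maxiter=max_iter, popsize=20, seed=0)
    return apply_pixel(result.x, image), result.fun
```

A multi-pixel variant follows the same pattern by extending the search vector to k (x, y, r, g, b) tuples; attack success can then be measured as the fraction of test images whose predicted label flips after the perturbation.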
Related Papers (50 in total)
  • [1] Hirano, Hokuto; Minagi, Akinori; Takemoto, Kazuhiro. Universal adversarial attacks on deep neural networks for medical image classification. BMC Medical Imaging, 2021, 21(1).
  • [2] Ahmed, Usman; Lin, Jerry Chun-Wei; Srivastava, Gautam. Mitigating adversarial evasion attacks by deep active learning for medical image classification. Multimedia Tools and Applications, 2022, 81(29): 41899-41910.
  • [3] Winter, Thomas C. Malicious Adversarial Attacks on Medical Image Analysis. American Journal of Roentgenology, 2020, 215(5): W55.
  • [4] Pervin, Tasnim; Huq, Aminul. A Study of Adversarial Attacks on Malaria Cell Image Classification. 2021 IEEE International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE), 2022: 79-82.
  • [5] Shukla, Sneha; Gupta, Anup Kumar; Gupta, Puneet. Exploring the feasibility of adversarial attacks on medical image segmentation. Multimedia Tools and Applications, 2024, 83(4): 11745-11768.
  • [6] Paul, Rahul; Schabath, Matthew; Gillies, Robert; Hall, Lawrence; Goldgof, Dmitry. Mitigating Adversarial Attacks on Medical Image Understanding Systems. 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI 2020), 2020: 1517-1521.
  • [7] Desjardins, Benoit; Ritenour, E. Russell. Reply to "Malicious Adversarial Attacks on Medical Image Analysis". American Journal of Roentgenology, 2020, 215(5): W56.