Adversarial Attacks on Medical Image Classification

Cited by: 4
Authors
Tsai, Min-Jen [1]
Lin, Ping-Yi [1]
Lee, Ming-En [1]
Affiliations
[1] Natl Yang Ming Chiao Tung Univ, Inst Informat Management, Hsinchu 300, Taiwan
Keywords
machine learning; artificial intelligence; adversarial learning; computer vision; metaheuristic
DOI
10.3390/cancers15174228
Chinese Library Classification
R73 [Oncology]
Discipline Code
100214
Abstract
Simple Summary: As we increasingly rely on advanced imaging for medical diagnosis, it is vital that computer programs can accurately interpret these images. Even a single altered pixel can lead to a wrong prediction, potentially causing an incorrect medical decision. This study examines how such tiny perturbations can fool advanced algorithms. By changing just one or a few pixels in medical images, we tested how various models handled these changes. The findings showed that even small perturbations made it hard for the models to interpret the images correctly. This raises concerns about the reliability of current computer-aided diagnostic tools and underscores the need for models that can resist such small disturbances.

Abstract: Due to the growing number of medical images produced by diverse radiological imaging techniques, radiography examinations with computer-aided diagnosis could greatly assist clinical applications. However, even a one-pixel inaccuracy in an image can lead to an incorrect prediction, and misclassification may lead to the wrong clinical decision. This scenario parallels adversarial attacks on deep learning models. Therefore, this study investigates one-pixel and multi-pixel attacks on Deep Neural Network (DNN) models trained on various medical image datasets. Common multiclass and multi-label datasets are examined for one-pixel attacks. Moreover, different experiments are conducted to determine how changing the number of perturbed pixels affects the classification performance and robustness of diverse DNN models. The experimental results show that the medical images rarely survived the pixel attacks, raising concerns about the accuracy of medical image classification and highlighting the importance of attack-resistant models for computer-aided diagnosis.
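The one-pixel attack described in the abstract perturbs a single pixel to flip a classifier's prediction; the paper's keywords mention a metaheuristic search (the classic formulation uses differential evolution). As a minimal illustration only, the idea can be sketched with plain random search against a toy linear classifier — the model, image size, and trial budget below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a fixed linear model with softmax.
# (The paper attacks deep CNNs; a linear model keeps this sketch self-contained.)
W = rng.normal(size=(2, 64))

def predict(img):
    """Return class probabilities for an 8x8 grayscale image in [0, 1]."""
    logits = W @ img.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def one_pixel_attack(img, true_class, trials=200):
    """Random-search sketch of a one-pixel attack: try single-pixel
    edits and keep the one that most reduces the true-class probability."""
    best_img, best_p = img, predict(img)[true_class]
    for _ in range(trials):
        cand = img.copy()
        x, y = rng.integers(0, 8, size=2)
        cand[x, y] = rng.random()          # overwrite exactly one pixel
        p = predict(cand)[true_class]
        if p < best_p:
            best_img, best_p = cand, p
    return best_img, best_p

image = rng.random((8, 8))
label = int(np.argmax(predict(image)))
adv, adv_p = one_pixel_attack(image, label)
print(f"true-class confidence: {predict(image)[label]:.3f} -> {adv_p:.3f}")
print("pixels changed:", int((adv != image).sum()))
```

A metaheuristic such as differential evolution replaces the random search with a guided population-based search over (x, y, value) triples, which is far more query-efficient against a real DNN.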
Pages: 22