Exploring the feasibility of adversarial attacks on medical image segmentation

Cited by: 2
Authors
Shukla, Sneha [1 ]
Gupta, Anup Kumar [1 ]
Gupta, Puneet [1 ]
Affiliations
[1] IIT Indore, Dept Comp Sci & Engn, Indore, Madhya Pradesh, India
Keywords
Adversarial attack; Deep neural network; Medical image segmentation; Semantic segmentation; Surrogate loss function; Architecture
DOI
10.1007/s11042-023-15575-8
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Recent advancements in Deep Learning (DL) based medical image segmentation models have led to tremendous growth in healthcare applications. However, DL models can be easily compromised by intelligently engineered adversarial attacks, which pose a serious threat to the security of life-critical healthcare applications. Thus, understanding the generation of adversarial attacks is essential for designing robust and reliable DL based healthcare models. To this end, we explore adversarial attacks on medical image segmentation models in this paper. The adversarial attacks are performed by backpropagating the loss function, which minimises the error metrics. However, most medical image segmentation models utilise several non-differentiable loss functions, which obstruct the attack. Consequently, the attacks are performed with surrogate loss functions that are differentiable approximations of the original loss function. However, we observe that different surrogate loss functions behave differently for the same input. Hence, choosing the best surrogate loss function is crucial for a successful attack. Furthermore, these DL models contain non-differentiable layers that obfuscate gradients and obstruct the attack. To mitigate these issues, we introduce an attack, MedIS (Medical Image Segmentation), which utilises parallel fusion for selecting the best surrogate loss function with the least added perturbation. Moreover, our proposed MedIS attack also provides guidelines for tackling non-differentiable layers by replacing them with differentiable approximations. Experiments conducted on several well-known medical image segmentation models employing multiple surrogate loss functions reveal that MedIS outperforms existing attacks on medical image segmentation by achieving a higher attack success rate.
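The surrogate-loss idea at the heart of the abstract can be sketched with a toy example: the hard Dice metric is computed on a thresholded mask, so it is piecewise constant with zero gradient almost everywhere and cannot drive a gradient-based attack; a differentiable surrogate (here, soft Dice) supplies usable gradients instead. The sketch below is a minimal NumPy illustration under assumptions of my own (a toy per-pixel sigmoid "segmenter" and a single FGSM-style step); it is not the paper's MedIS algorithm, its parallel-fusion surrogate selection, or its treatment of non-differentiable layers.

```python
import numpy as np

def model(x, k=8.0, t=0.5):
    """Toy per-pixel segmenter: a sharp sigmoid around threshold t.
    Stands in for a DL network's per-pixel probability map."""
    return 1.0 / (1.0 + np.exp(-k * (x - t)))

def hard_dice(prob, y, thresh=0.5):
    """Evaluation metric: Dice on the binarised prediction.
    Thresholding makes it piecewise constant, so its gradient is
    zero almost everywhere -- useless for backpropagation."""
    pred = (prob > thresh).astype(float)
    return 2.0 * (pred * y).sum() / (pred.sum() + y.sum() + 1e-8)

def soft_dice_grad_x(x, y, k=8.0, t=0.5):
    """Gradient w.r.t. the input x of the differentiable surrogate
    L = 1 - 2*sum(p*y) / (sum(p) + sum(y)), chained through the sigmoid."""
    p = model(x, k, t)
    S = p.sum() + y.sum() + 1e-8
    num = (p * y).sum()
    dL_dp = -2.0 * (y * S - num) / S**2   # quotient rule on the surrogate
    dp_dx = k * p * (1.0 - p)             # sigmoid derivative
    return dL_dp * dp_dx

rng = np.random.default_rng(0)
y = (rng.random((32, 32)) > 0.5).astype(float)              # ground-truth mask
x = np.clip(y + 0.2 * rng.standard_normal((32, 32)), 0, 1)  # clean "image"

eps = 0.15  # L-infinity perturbation budget
# One FGSM-style step: ascend the surrogate loss, stay within [0, 1].
x_adv = np.clip(x + eps * np.sign(soft_dice_grad_x(x, y)), 0, 1)

dice_clean = hard_dice(model(x), y)
dice_adv = hard_dice(model(x_adv), y)
print(f"Dice clean: {dice_clean:.3f}  Dice adversarial: {dice_adv:.3f}")
```

The attack never touches the hard Dice metric directly; it degrades it only through the surrogate's gradients, which mirrors the paper's observation that the choice of surrogate determines how much perturbation is needed for a given drop in segmentation quality.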
Pages: 11745 - 11768 (24 pages)
Related papers (50 in total)
  • [1] Exploring the feasibility of adversarial attacks on medical image segmentation
    Sneha Shukla
    Anup Kumar Gupta
    Puneet Gupta
    [J]. Multimedia Tools and Applications, 2024, 83 : 11745 - 11768
  • [2] Adversarial Attacks on Medical Image Classification
    Tsai, Min-Jen
    Lin, Ping-Yi
    Lee, Ming-En
    [J]. CANCERS, 2023, 15 (17)
  • [3] Malicious Adversarial Attacks on Medical Image Analysis
    Winter, Thomas C.
    [J]. AMERICAN JOURNAL OF ROENTGENOLOGY, 2020, 215 (05) : W55 - W55
  • [4] Adversarial Attacks for Image Segmentation on Multiple Lightweight Models
    Kang, Xu
    Song, Bin
    Du, Xiaojiang
    Guizani, Mohsen
    [J]. IEEE ACCESS, 2020, 8 : 31359 - 31370
  • [5] Exploring Adversarial Attacks in Federated Learning for Medical Imaging
    Darzi, Erfan
    Dubost, Florian
    Sijtsema, Nanna M.
    van Ooijen, P. M. A.
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024: 13591 - 13599
  • [6] Adversarial attacks and adversarial training for burn image segmentation based on deep learning
    Chen, Luying
    Liang, Jiakai
    Wang, Chao
    Yue, Keqiang
    Li, Wenjun
    Fu, Zhihui
    [J]. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024, 62 (09) : 2717 - 2735
  • [7] Reply to "Malicious Adversarial Attacks on Medical Image Analysis"
    Desjardins, Benoit
    Ritenour, E. Russell
    [J]. AMERICAN JOURNAL OF ROENTGENOLOGY, 2020, 215 (05) : W56 - W56
  • [8] MITIGATING ADVERSARIAL ATTACKS ON MEDICAL IMAGE UNDERSTANDING SYSTEMS
    Paul, Rahul
    Schabath, Matthew
    Gillies, Robert
    Hall, Lawrence
    Goldgof, Dmitry
    [J]. 2020 IEEE 17TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2020), 2020, : 1517 - 1521
  • [9] Exploring adversarial image attacks on deep learning models in oncology
    Joel, Marina
    Umrao, Sachin
    Chang, Enoch
    Choi, Rachel
    Yang, Daniel
    Gilson, Aidan
    Herbst, Roy
    Krumholz, Harlan
    Aneja, Sanjay
    [J]. CLINICAL CANCER RESEARCH, 2021, 27 (05)
  • [10] Exploring the Adversarial Robustness of Video Object Segmentation via One-shot Adversarial Attacks
    Jiang, Kaixun
    Hong, Lingyi
    Chen, Zhaoyu
    Guo, Pinxue
    Tao, Zeng
    Wang, Yan
    Zhang, Wenqiang
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8598 - 8607