Adversarial attacks and adversarial training for burn image segmentation based on deep learning

Times Cited: 0
Authors
Chen, Luying [1 ]
Liang, Jiakai [1 ]
Wang, Chao [1 ]
Yue, Keqiang [1 ]
Li, Wenjun [1 ]
Fu, Zhihui [2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Zhejiang Integrated Circuits & Intelligent Hardware, Hangzhou 317300, Peoples R China
[2] Zhejiang Univ, Affiliated Hosp 2, Sch Med, Hangzhou 310009, Peoples R China
Keywords
Deep learning; Burn images; Adversarial attack; Adversarial training; Image segmentation; CLASSIFICATION; DISEASES; DEPTH
DOI
10.1007/s11517-024-03098-9
CLC Number
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
Deep learning is widely applied to image classification and segmentation, yet adversarial attacks can compromise model results on both tasks. Medical images are especially vulnerable: owing to constraints such as shooting angle, environmental lighting, and diverse imaging devices, they typically contain various forms of noise. To address the impact of these physically meaningful perturbations on existing deep learning models for burn image segmentation, we simulate attack methods inspired by natural phenomena and propose an adversarial training approach designed specifically for burn image segmentation. The method is evaluated on our burn dataset. After defensive training with our approach, the segmentation accuracy on adversarial samples rises from an initial 54% to 82.19%, a 1.97% improvement over conventional adversarial training methods, while substantially reducing training time. Ablation experiments validate the effectiveness of the individual losses, and we assess and compare training results on different adversarial samples using various metrics.
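The record gives no implementation details, so as a rough, generic illustration of adversarial training for a segmentation network (not the paper's nature-inspired attacks or its specific loss terms), a minimal PyTorch-style sketch might look like the following. The function names fgsm_perturb and adversarial_training_step, the eps budget, and the adv_weight mixing factor are all hypothetical choices.

```python
# Minimal sketch of adversarial training for semantic segmentation (PyTorch).
# This is a generic FGSM-based stand-in, NOT the paper's method: the authors
# use perturbations inspired by natural phenomena and loss terms that are
# not described in the abstract.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, masks, eps=8 / 255):
    """One-step FGSM attack: shift each pixel in the direction that
    increases the segmentation loss, within an L-inf budget of eps."""
    images = images.clone().detach().requires_grad_(True)
    # logits: [B, C, H, W]; masks: [B, H, W] of class indices
    loss = F.cross_entropy(model(images), masks)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + eps * grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, masks, adv_weight=0.5):
    """One optimizer step on a weighted mix of clean and adversarial losses."""
    model.train()
    adv_images = fgsm_perturb(model, images, masks)
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(images), masks)
    adv_loss = F.cross_entropy(model(adv_images), masks)
    loss = (1.0 - adv_weight) * clean_loss + adv_weight * adv_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Evaluating on both clean and perturbed validation sets, as the abstract's metrics suggest, would expose the clean-accuracy versus robustness trade-off that the adv_weight factor controls.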
Pages: 2717-2735
Number of Pages: 19