CardioDefense: Defending against adversarial attack in ECG classification with adversarial distillation training

Cited by: 1
Authors
Shao, Jiahao [1 ]
Geng, Shijia [2 ]
Fu, Zhaoji [2 ,3 ]
Xu, Weilun [4 ]
Liu, Tong [5 ]
Hong, Shenda [1 ,6 ]
Affiliations
[1] Peking Univ, Natl Inst Hlth Data Sci, Beijing 100191, Peoples R China
[2] HeartVoice Med Technol, Hefei 230088, Peoples R China
[3] Univ Sci & Technol China, Sch Management, Hefei 230026, Peoples R China
[4] HeartRhythm Med, Beijing 100020, Peoples R China
[5] Tianjin Med Univ, Tianjin Inst Cardiol, Dept Cardiol, Tianjin Key Lab Ion Mol Funct Cardiovasc Dis,Hosp, Tianjin 300211, Peoples R China
[6] Peking Univ, Inst Med Technol, Hlth Sci Ctr, Beijing 100191, Peoples R China
Keywords
Deep learning; Electrocardiograms; Adversarial training; Distillation; Adversarial attack; ELECTROCARDIOGRAMS;
DOI
10.1016/j.bspc.2023.105922
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
In clinics, doctors rely on electrocardiograms (ECGs) to assess severe cardiac disorders. Owing to technological development and growing health awareness, ECG signals are now acquired by both medical and commercial devices. Deep neural networks (DNNs) can be used to analyze these signals because of their high accuracy. However, researchers have found that adversarial attacks can significantly reduce the accuracy of DNNs. Studies have been conducted to defend ECG-based DNNs against adversarial attacks such as projected gradient descent (PGD) and smooth adversarial perturbation (SAP), which target ECG classification; however, to the best of our knowledge, no study has comprehensively explored defenses against adversarial attacks targeting ECG classification. Thus, we conducted experiments to explore the effects of defense methods against white-box and black-box adversarial attacks targeting ECG classification, and we found that some common defense methods performed well against these attacks. Moreover, we proposed a new defense method based on adversarial distillation training (named CardioDefense), which derives from defensive distillation and can effectively improve the generalization performance of DNNs. The results show that our method defended against adversarial attacks targeting ECG classification more effectively than the baseline methods, namely adversarial training, defensive distillation, Jacobian regularization, and noise-to-signal ratio regularization. Furthermore, our method performed better against PGD attacks with low noise levels, indicating stronger robustness.
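The abstract names the PGD attack as the main white-box threat model. As context for readers unfamiliar with it, a minimal NumPy sketch of an L-infinity PGD attack is shown below; it uses a hand-written logistic-regression scorer as the victim model purely for illustration (the paper attacks ECG DNNs, and the function names, parameters, and model here are assumptions, not the authors' code):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-inf PGD attack on a logistic-regression scorer f(x) = sigmoid(w.x + b).

    Repeatedly steps in the sign of the loss gradient w.r.t. the input,
    then projects the perturbed signal back into an eps-ball around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = float(w @ x_adv + b)
        p = 1.0 / (1.0 + np.exp(-z))           # predicted probability of class 1
        grad = (p - y) * w                     # d(cross-entropy)/dx for this model
        x_adv = x_adv + alpha * np.sign(grad)  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the L-inf eps-ball
    return x_adv
```

The `eps` bound plays the role of the attack's noise level discussed in the abstract: smaller `eps` gives less perceptible perturbations, and a defense that holds up at small `eps` is the regime where the authors report CardioDefense performing best.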
Pages: 10