Unveiling the veil: high-frequency components as the key to understanding medical DNNs’ vulnerability to adversarial examples

Cited by: 0
Authors
Yaguan Qian [1 ]
Renhui Tao [1 ]
Huabin Du [1 ]
Bin Wang [2 ]
Affiliations
[1] Zhejiang University of Science and Technology, School of Science
[2] Zhejiang Key Laboratory of Artificial Intelligence of Things (AIoT) Network and Data Security
Keywords
Adversarial example; Fourier analysis; High-frequency component; Low-frequency component;
DOI
10.1186/s42400-024-00330-9
Abstract
Deep Neural Networks (DNNs) have demonstrated outstanding performance in various medical image processing tasks. However, recent studies have revealed that medical DNNs are markedly more vulnerable to adversarial attacks than DNNs trained on natural images. In this work, we present a novel perspective by analyzing the disparities between medical and natural datasets, focusing specifically on how the data are collected. Our analysis uncovers distinct differences in the data distribution across image classes in medical datasets, a phenomenon absent in natural datasets. To gain deeper insight, we investigate medical DNNs with Fourier analysis tools. Intriguingly, we find that high-frequency components in medical images are more strongly associated with their labels than those in natural images. These high-frequency components distract the attention of medical DNNs, rendering them more susceptible to adversarial images. To mitigate this vulnerability, we propose a preprocessing-based defense, Removing High-frequency Components (RH) training. Our experiments demonstrate that RH training significantly enhances the robustness of medical DNNs against adversarial attacks and, in certain scenarios, even outperforms traditional adversarial training, particularly under black-box attacks.
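The abstract describes RH training as a preprocessing step that suppresses high-frequency image content before it reaches the network. As a rough, hedged illustration of that idea only, the sketch below low-pass filters a single-channel image in the Fourier domain; the function name `remove_high_frequency`, the `keep_ratio` cutoff, and the square frequency mask are assumptions made for this example and are not the paper's exact RH implementation.

```python
import numpy as np

def remove_high_frequency(image, keep_ratio=0.25):
    """Suppress high-frequency components of a 2-D image with a centered
    low-pass mask in the Fourier domain (illustrative sketch only)."""
    # Forward FFT and shift the zero-frequency component to the center.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    # Keep only a central square of the spectrum (the low frequencies);
    # the square shape and keep_ratio are assumed, not taken from the paper.
    ry = max(1, int(h * keep_ratio / 2))
    rx = max(1, int(w * keep_ratio / 2))
    mask = np.zeros((h, w))
    mask[cy - ry:cy + ry, cx - rx:cx + rx] = 1.0
    # Inverse transform back to the spatial domain.
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# Example: filter a synthetic 256x256 image before it is fed to a classifier.
img = np.random.rand(256, 256)
low_pass_img = remove_high_frequency(img, keep_ratio=0.25)
```

In an RH-style training pipeline, such a filter would be applied to every training and test image, so the model never learns to rely on the high-frequency cues that adversarial perturbations tend to exploit.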
Related Papers
3 records in total
  • [1] Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve
    Dang Duy Thang; Matsui, Toshihiro
    CYBERSPACE SAFETY AND SECURITY, PT I, 2020, 11982: 348-362
  • [2] High frequency patterns play a key role in the generation of adversarial examples
    Zhou, Yue; Hu, Xiaofang; Han, Jiaqi; Wang, Lidan; Duan, Shukai
    NEUROCOMPUTING, 2021, 459: 131-141
  • [3] Conditional Generative Adversarial Network Aided Iron Loss Prediction for High-Frequency Magnetic Components
    Shen, Xiaobing; Zuo, Yu; Martinez, Wilmar
    IEEE TRANSACTIONS ON POWER ELECTRONICS, 2024, 39(08): 9953-9964