Mitigating adversarial evasion attacks by deep active learning for medical image classification

Cited by: 0
Authors
Usman Ahmed
Jerry Chun-Wei Lin
Gautam Srivastava
Affiliations
[1] Western Norway University of Applied Sciences, Department of Computer Science, Electrical Engineering and Mathematical Sciences
[2] Brandon University, Department of Mathematics & Computer Science
[3] China Medical University, Research Centre for Interneural Computing
Keywords
Adversarial attack; IoMT; Medical image analysis; Deep learning
Abstract
In the Internet of Medical Things (IoMT), collaboration among institutions can support complex medical and clinical analyses of disease. Deep neural networks (DNNs) must be trained on large, diverse patient datasets to achieve expert clinician-level performance, yet individual clinical studies rarely contain diverse patient populations due to limited data availability and scale. DNN models trained on such limited datasets therefore suffer constrained clinical performance when deployed at a new hospital, so increasing the availability of diverse training data is of significant value. This research proposes inter-institutional data collaboration alongside an adversarial evasion mitigation method to keep the data secure. The model uses a federated learning approach to share model weights and gradients. The local model first examines unlabeled samples, classifying them as adversarial or normal. The method then applies a centroid-based clustering technique to group the sample images. The model then predicts outputs for the selected images, and active learning is used to choose a sub-sample for human annotation. A domain expert reviews each input together with its confidence score and validates the samples for model training. The model is retrained on the new samples and sends the updated weights across the network for collaboration. We evaluate InceptionV3 and VGG16 models under fabricated inputs simulating Fast Gradient Sign Method (FGSM) attacks. The model was able to withstand these attacks and achieved a high accuracy of 95%.
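The abstract evaluates pretrained InceptionV3 and VGG16 classifiers against FGSM attacks. As a rough illustration of how such adversarial inputs are typically fabricated, the sketch below applies the standard FGSM perturbation x_adv = x + epsilon * sign(grad_x L(theta, x, y)) to a Keras VGG16 model. This is a minimal sketch, not the authors' implementation: the TensorFlow/Keras setup, the epsilon value, and the helper name fgsm_perturb are assumptions for illustration only.

import tensorflow as tf

# Pretrained VGG16 (ImageNet weights) as a stand-in for the medical-imaging
# classifier described in the paper; the actual model is trained on clinical data.
model = tf.keras.applications.VGG16(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm_perturb(images, labels, epsilon=0.01):
    # `images` is a preprocessed batch; `labels` are one-hot encoded.
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        predictions = model(images)
        loss = loss_fn(labels, predictions)
    # The sign of the input gradient points in the direction that increases the loss.
    gradients = tape.gradient(loss, images)
    return images + epsilon * tf.sign(gradients)

In the pipeline described above, images perturbed this way would form the "adversarial" class that the local model learns to separate from normal samples before the clustering and active-learning selection steps.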
Pages: 41899-41910 (11 pages)