Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations

Cited by: 10
Authors
Allyn, Jerome [1 ,2 ]
Allou, Nicolas [1 ,2 ]
Vidal, Charles [1 ]
Renou, Amelie [1 ]
Ferdynus, Cyril [2 ,3 ,4 ]
Affiliations
[1] St Denis Univ Hosp, Intens Care Unit, St Denis, Reunion Island, France
[2] St Denis Univ Hosp, Clin Informat Dept, St Denis, Reunion Island, France
[3] St Denis Univ Hosp, Methodol Support Unit, St Denis, Reunion Island, France
[4] INSERM, CIC 1410, F-97410 St Pierre, Reunion, France
Keywords
adversarial attack; artificial intelligence; deep learning; dermatoscopic lesions; image recognition systems; DIABETIC-RETINOPATHY; HEALTH-CARE; VALIDATION; DIAGNOSIS;
DOI
10.1097/MD.0000000000023568
CLC number
R5 [Internal Medicine];
Discipline codes
1002; 100201;
Abstract
Deep learning algorithms have shown excellent performance in the field of medical image recognition, and practical applications have been made in several medical domains. Little is known about the feasibility and impact of undetectable adversarial attacks, which can disrupt an algorithm by modifying as little as a single pixel of the image to be interpreted. The aim of this study was to test the feasibility and impact of an adversarial attack on the accuracy of a deep learning-based dermatoscopic image recognition system. First, the pre-trained convolutional neural network DenseNet-201 was trained to classify images from the training set into 7 categories. Second, an adversarial neural network was trained to generate undetectable perturbations on images from the test set, so that all perturbed images would be classified as melanocytic nevi. The perturbed images were then classified using the model generated in the first step. The study used the HAM10000 dataset, an open-source image database containing 10,015 dermatoscopic images, which was split into a training set and a test set. The accuracy of the generated classification model was evaluated using images from the test set, and the accuracy of the model with and without perturbed images was compared. The ability of 2 observers to detect image perturbations was evaluated, and the interobserver agreement was calculated. The overall accuracy of the classification model dropped from 84% (95% confidence interval (CI): 82-86) for unperturbed images to 67% (95% CI: 65-69) for perturbed images (McNemar test, P < .0001). The fooling ratio reached 100% for all categories of skin lesions. The sensitivity and specificity of the combined observers, calculated on a random sample of 50 images, were 58.3% (95% CI: 45.9-70.8) and 42.5% (95% CI: 27.2-57.8), respectively. The kappa agreement coefficient between the 2 observers was negative, at -0.22 (95% CI: -0.49 to -0.04).
Adversarial attacks on medical image databases can distort interpretation by image recognition algorithms, are easy to carry out, and are undetectable by humans. It seems essential to improve our understanding of deep learning-based image recognition systems and to upgrade their security before putting them into practical, daily use.
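The targeted attack described in the abstract, perturbing each image within an imperceptibly small bound so that the classifier labels it as the chosen class (here, melanocytic nevi), can be sketched with projected gradient descent. The snippet below is a minimal illustration on a toy linear-softmax "classifier", not the paper's DenseNet-201 model or its adversarial network; the image size, epsilon, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in classifier: a linear softmax over flattened 8x8 "images"
# with 7 lesion classes (as in HAM10000). Weights are random.
n_pixels, n_classes = 64, 7
W = rng.normal(size=(n_classes, n_pixels))
b = rng.normal(size=n_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_proba(x):
    return softmax(W @ x + b)

def targeted_pgd(x, target, eps=0.05, step=0.01, iters=50):
    """Push x toward `target` by gradient descent on the cross-entropy
    loss, keeping the perturbation inside an L-inf ball of radius eps."""
    x_adv = x.copy()
    onehot = np.zeros(n_classes)
    onehot[target] = 1.0
    for _ in range(iters):
        p = predict_proba(x_adv)
        # For linear logits, d/dx [-log p_target] = W^T (p - onehot(target))
        grad = W.T @ (p - onehot)
        x_adv = x_adv - step * np.sign(grad)       # step toward the target class
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project back into the eps-ball
        # (a real image pipeline would also clip to the valid pixel range)
    return x_adv

x = rng.normal(size=n_pixels)   # a fake "image"
target = 0                      # hypothetical index of the melanocytic-nevus class
x_adv = targeted_pgd(x, target)

# The perturbation stays tiny, yet the target-class probability rises.
assert np.abs(x_adv - x).max() <= 0.05 + 1e-9
assert predict_proba(x_adv)[target] > predict_proba(x)[target]
```

The key design point mirrored from the paper is that the attacker never needs large changes: the L-inf projection caps every pixel's modification, which is why such perturbations remain invisible to human observers.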
Pages: 6