Adversarial Attacks and Defense on Deep Learning Classification Models using YCbCr Color Images

Cited by: 3
Authors
Pestana, Camilo [1 ]
Akhtar, Naveed [1 ]
Liu, Wei [1 ]
Glance, David [1 ]
Mian, Ajmal [1 ]
Affiliations
[1] Univ Western Australia, Dept Comp Sci, 35 Stirling Hwy, Crawley, WA 6009, Australia
Keywords
adversarial attacks; adversarial defense; YCbCr; super-resolution
DOI
10.1109/IJCNN52387.2021.9533495
CLC Classification Code
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep neural network models are vulnerable to adversarial perturbations that are subtle yet change the model's predictions. Adversarial perturbations are generally computed for RGB images and are, hence, distributed equally among the RGB channels. We show, for the first time, that adversarial perturbations prevail in the Y-channel of the YCbCr color space, and we exploit this finding to propose a defense mechanism. Our defense, ResUpNet, which is end-to-end trainable, removes perturbations only from the Y-channel by exploiting ResNet features in a bottleneck-free up-sampling framework. The refined Y-channel is combined with the untouched Cb- and Cr-channels to restore the clean image. We compare ResUpNet to existing defenses in the input-transformation category and show that it achieves the best balance between maintaining the original accuracy on clean images and defending against adversarial attacks. Finally, we show that for the same attack and a fixed perturbation magnitude, learning perturbations only in the Y-channel results in higher fooling rates. For example, with a very small perturbation magnitude of ε = 0.002, the fooling rates of FGSM and PGD attacks on a ResNet50 model increase by 11.1% and 15.6%, respectively, when the perturbations are learned only for the Y-channel.
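The defense pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the ITU-R BT.601 full-range conversion matrix is one common RGB/YCbCr convention (the paper may use another), and `denoise_y` is a hypothetical placeholder standing in for a trained ResUpNet. The sketch only shows the key idea: restore the luma channel, leave chroma untouched, and convert back.

```python
# Minimal sketch (not the authors' implementation) of a Y-channel-only
# defense: denoise Y, recombine with the untouched Cb/Cr channels.
import numpy as np

# ITU-R BT.601 full-range RGB -> YCbCr matrix (an assumed convention).
_RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114   ],
                       [-0.168736, -0.331264,  0.5     ],
                       [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3) -> YCbCr, same shape."""
    ycc = rgb @ _RGB2YCBCR.T
    ycc[..., 1:] += 0.5                      # center chroma at 0.5
    return ycc

def ycbcr_to_rgb(ycc):
    """Inverse transform, clipped back to the valid [0, 1] range."""
    ycc = ycc.copy()
    ycc[..., 1:] -= 0.5
    return np.clip(ycc @ np.linalg.inv(_RGB2YCBCR).T, 0.0, 1.0)

def defend(adv_rgb, denoise_y):
    """Remove perturbations from Y only; keep Cb and Cr as-is."""
    ycc = rgb_to_ycbcr(adv_rgb)
    y_clean = denoise_y(ycc[..., 0])         # placeholder for ResUpNet
    restored = np.stack([y_clean, ycc[..., 1], ycc[..., 2]], axis=-1)
    return ycbcr_to_rgb(restored)
```

The design choice follows directly from the paper's observation: since perturbation energy concentrates in the luma channel, leaving Cb/Cr untouched preserves color fidelity (and accuracy on clean images) while the denoiser only has to learn a single-channel restoration task.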
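The abstract's second finding, that perturbations learned only in the Y-channel fool models at higher rates for the same ε, can likewise be sketched as an FGSM-style attack restricted to Y. This is a hedged illustration, not the paper's attack code: `fgsm_y_only` and the BT.601 conversion constants are assumptions, `model` is any classifier taking RGB tensors in [0, 1] (e.g. a pretrained ResNet50), and ε = 0.002 echoes the value quoted in the abstract.

```python
# Hedged PyTorch sketch of an FGSM step restricted to the Y channel:
# gradients flow only through Y; Cb and Cr are held fixed.
import torch
import torch.nn.functional as F

def rgb_to_ycbcr_t(rgb):
    """rgb: (N, 3, H, W) in [0, 1] -> per-channel (N, H, W) tensors."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb_t(y, cb, cr):
    """BT.601 inverse (an assumed convention), clamped to [0, 1]."""
    r = y + 1.402 * (cr - 0.5)
    g = y - 0.344136 * (cb - 0.5) - 0.714136 * (cr - 0.5)
    b = y + 1.772 * (cb - 0.5)
    return torch.stack([r, g, b], dim=1).clamp(0.0, 1.0)

def fgsm_y_only(model, rgb, label, eps=0.002):
    """One FGSM step applied to the Y channel only."""
    y, cb, cr = rgb_to_ycbcr_t(rgb)
    y = y.detach().requires_grad_(True)      # only Y receives gradients
    loss = F.cross_entropy(model(ycbcr_to_rgb_t(y, cb, cr)), label)
    loss.backward()
    y_adv = (y + eps * y.grad.sign()).clamp(0.0, 1.0).detach()
    return ycbcr_to_rgb_t(y_adv, cb, cr)     # adversarial image in RGB
```

Comparing the fooling rate of this Y-only step against a standard RGB FGSM step at the same ε is the experiment the abstract summarizes with its 11.1% (FGSM) and 15.6% (PGD) increases.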
Pages: 9