Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Cited by: 0
Authors
Luo, Bo [1 ]
Liu, Yannan [1 ]
Wei, Lingxiao [1 ]
Xu, Qiang [1 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning systems based on deep neural networks, which produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they have been shown to be vulnerable to adversarial example attacks, in which slight perturbations added to the input cause malicious outputs. Previous adversarial example crafting methods, however, use simple metrics to evaluate the distance between the original examples and the adversarial ones, so the resulting perturbations can be easily detected by human eyes. In addition, these attacks are often not robust to the inevitable noise and deviations of the physical world. In this work, we present a new adversarial example crafting method that takes the human perceptual system into consideration and maximizes the noise tolerance of the crafted adversarial example. Experimental results demonstrate the efficacy of the proposed technique.
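The abstract contrasts simple distance metrics with a perceptually weighted one. A minimal sketch of that general idea, assuming (this is an illustration, not the paper's exact formulation) that perturbations in smooth, low-variance image regions are more visible than perturbations in textured regions, so each pixel's change is weighted by a sensitivity term inversely related to its local standard deviation:

```python
import numpy as np

def local_std(image, k=3):
    """Per-pixel standard deviation over a k x k neighborhood (edge-padded)."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    # Stack the k*k shifted views of the image and take the std across them.
    views = [padded[i:i + h, j:j + w] for i in range(k) for j in range(k)]
    return np.std(np.stack(views), axis=0)

def perceptual_distance(original, adversarial, eps=1e-6):
    """Sensitivity-weighted perturbation magnitude.

    A change in a flat (low-variance) region gets a large weight, i.e. it is
    treated as more perceptible; the same change in a textured region is
    discounted. `eps` only guards against division by zero.
    """
    delta = np.abs(adversarial - original)
    sensitivity = 1.0 / (local_std(original) + eps)
    return float(np.sum(sensitivity * delta))
```

Under this measure, the same one-pixel perturbation scores far higher in a flat region than in a noisy one, which is the intuition behind hiding perturbations where human vision is least sensitive. The neighborhood size `k` and the exact sensitivity function are illustrative choices.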
Pages: 1652-1659
Page count: 8