Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification

Cited by: 0
Authors
Jiang, Xiaoyu [1 ,2 ]
Kong, Xiangyin [2 ]
Zheng, Junhua [3 ]
Ge, Zhiqiang [4 ]
Zhang, Xinmin [2 ]
Song, Zhihuan [1 ,2 ]
Affiliations
[1] Guangdong Univ Petrochem Technol, Sch Automat, Maoming 525000, Peoples R China
[2] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou 310027, Peoples R China
[3] Zhejiang Univ Sci & Technol, Sch Automat & Elect Engn, Hangzhou 310023, Peoples R China
[4] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Artificial neural networks; Perturbation methods; Closed box; Predictive models; Iterative methods; Indexes; Adversarial attack; classification confidence score; imperfect DNNs; iterative targeted attacks; model evaluation;
DOI
10.1109/TII.2024.3449999
Chinese Library Classification
TP [Automation technology; Computer technology];
Discipline Code
0812;
Abstract
In recent years, deep neural networks (DNNs) have been widely applied in fault classification tasks. Their adversarial security has received attention, but little consideration has been given to the robustness of adversarial attacks against imperfect DNNs. Owing to the data scarcity and quality deficiencies prevalent in industrial data, the performance of DNNs may be severely constrained. In addition, black-box attacks against industrial fault classification models struggle to obtain sufficient and comprehensive data for constructing surrogate models with perfect decision boundaries. To address this gap, this article analyzes the outcomes of adversarial attacks on imperfect DNNs and categorizes their decision scenarios. Building on this analysis, we propose a robust adversarial attack strategy that transforms traditional adversarial attacks into an iterative targeted attack (ITA). The ITA framework begins with an evaluation of the DNN, for which a classification confidence score (CCS) is designed. Using the CCS and the prediction probabilities of the data, the target labels and the order of targeted attacks are defined. The adversarial attacks are then carried out by iteratively selecting attack targets and applying gradient optimization. Experimental results on both a benchmark dataset and an industrial case demonstrate the superiority of the proposed method.
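The abstract outlines the ITA idea: rank candidate target classes by a confidence-based criterion, then run a gradient-based targeted attack against each candidate in turn until the prediction flips. The paper's exact CCS and optimization details are not given here, so the sketch below is only an illustrative approximation on a linear softmax classifier, using the model's own predicted class probabilities as a stand-in for the CCS-based target ordering; the function names and hyperparameters (`eps`, `alpha`, `iters`) are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def iterative_targeted_attack(x, W, b, y_true, eps=0.5, alpha=0.05, iters=100):
    """Sketch of an iterative targeted attack on a linear softmax model
    (logits = W @ x + b). Candidate target classes are tried in order of
    the model's predicted probability (a proxy for a CCS-based ordering);
    for each target, signed-gradient steps minimize the targeted
    cross-entropy -log p_t under an L_inf budget eps.
    Returns (x_adv, achieved_class)."""
    p0 = softmax(W @ x + b)
    # Candidate targets: every wrong class, most probable first.
    targets = [c for c in np.argsort(-p0) if c != y_true]
    for t in targets:
        x_adv = x.copy()
        for _ in range(iters):
            p = softmax(W @ x_adv + b)
            if p.argmax() == t:
                return x_adv, t
            # Gradient of -log p_t w.r.t. x for a softmax-linear model:
            # sum_c (p_c - 1[c == t]) * w_c
            grad = W.T @ (p - np.eye(len(p))[t])
            x_adv = x_adv - alpha * np.sign(grad)
            # Project back into the L_inf ball around the clean input.
            x_adv = np.clip(x_adv, x - eps, x + eps)
        if softmax(W @ x_adv + b).argmax() == t:
            return x_adv, t
    return x, y_true  # all targets failed within the budget
```

For a real DNN the analytic gradient above would be replaced by backpropagation through the (surrogate) model; the control flow of trying ranked targets in sequence is the part that mirrors the iterative targeted strategy described in the abstract.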
Pages: 11