Robustness evaluation for deep neural networks via mutation decision boundaries analysis

Cited by: 5
Authors
Lin, Renhao [1 ]
Zhou, Qinglei [1 ]
Wu, Bin [1 ]
Nan, Xiaofei [1 ]
Affiliations
[1] Zhengzhou Univ, Sch Comp & Artificial Intelligence, Zhengzhou 450001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neural networks; Robustness verification; Mutation testing; Decision boundary; Unstable points; Adversarial examples;
DOI
10.1016/j.ins.2022.04.020
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline code
0812 ;
Abstract
While recent years have witnessed the power of deep neural networks in representation learning, their lack of robustness is a well-known inherent weakness. Formal verification sheds light on this issue by establishing robustness through rigorous mathematical reasoning; nevertheless, such techniques still suffer from efficiency and scalability problems. In light of this, we develop a novel solution that performs a pre-analysis before verification. Specifically, we argue that points near the actual decision boundary of a neural network are more likely to violate robustness, so we focus on locating unstable points in the input set instead of verifying point by point. Borrowing from mutation testing, we analyze mutation decision boundaries to evaluate the local robustness of inputs, and we design a robustness metric to guide the selection of unstable points. Effective adversarial examples can then be generated by perturbing these unstable points. Extensive experiments on two neural network verification benchmarks demonstrate the rationality, effectiveness, and efficiency gains of our solution. (C) 2022 Elsevier Inc. All rights reserved.
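The idea in the abstract can be illustrated with a toy example. The sketch below is hypothetical and not the authors' implementation: it stands in a linear classifier for the network, generates weight-perturbation "mutants" whose decision boundaries shift slightly, and scores an input's instability as the fraction of mutants that flip its prediction. Points near the decision boundary flip far more often than distant ones, so a high score flags them as unstable candidates for adversarial perturbation.

```python
import random

def predict(w, b, x):
    # A linear decision rule stands in for the network under analysis.
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

def make_mutants(w, b, n, scale, rng):
    # Weight-mutation operators: each mutant adds small Gaussian noise to
    # the weights and bias, slightly shifting the decision boundary.
    return [([wi + rng.gauss(0.0, scale) for wi in w],
             b + rng.gauss(0.0, scale)) for _ in range(n)]

def instability(w, b, x, mutants):
    # Instability score: fraction of mutants whose prediction on x differs
    # from the original model's. Near-boundary points flip often.
    base = predict(w, b, x)
    flips = sum(1 for mw, mb in mutants if predict(mw, mb, x) != base)
    return flips / len(mutants)

rng = random.Random(0)
w, b = [1.0, -1.0], 0.0
mutants = make_mutants(w, b, 200, 0.1, rng)
near = instability(w, b, [0.01, 0.0], mutants)   # close to the boundary
far = instability(w, b, [5.0, -5.0], mutants)    # far from the boundary
print(near > far)
```

Ranking inputs by this score would let a verifier prioritize (or an attacker perturb) the unstable points first, rather than verifying every input point by point.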
Pages: 147-161 (15 pages)