Excitement surfeited turns to errors: Deep learning testing framework based on excitable neurons

Cited by: 2
Authors
Jin, Haibo [1 ]
Chen, Ruoxi [1 ]
Zheng, Haibin [1 ,2 ]
Chen, Jinyin [1 ,2 ]
Cheng, Yao [3 ]
Yu, Yue [4 ]
Chen, Tieming [5 ]
Liu, Xianglong [6 ]
Affiliations
[1] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou, Peoples R China
[2] Zhejiang Univ Technol, Inst Cyberspace Secur, Hangzhou, Peoples R China
[3] Huawei Int, Singapore, Singapore
[4] Natl Univ Def Technol, Coll Comp, Natl Lab Parallel & Distributed Proc, Changsha, Peoples R China
[5] Zhejiang Univ Technol, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[6] Beihang Univ, State Key Lab Software Dev Environm, Beijing, Peoples R China
Keywords
Deep neural networks; Deep learning testing; Cooperative game theory; Excitable neurons; Robustness
DOI
10.1016/j.ins.2023.118936
CLC Number
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
Despite their impressive capabilities and outstanding performance, deep neural networks (DNNs) have become a growing public concern due to their frequently occurring erroneous behaviors. As a result, it is urgent to test DNNs systematically for security issues before they are deployed in the real world. Existing testing approaches provide fine-grained metrics based on neuron coverage, and various methods have been proposed to enhance these metrics. However, it has gradually been realized that higher neuron coverage does not necessarily indicate a better ability to identify the defects that lead to errors. Moreover, coverage-guided methods cannot detect errors caused by a faulty training procedure, so the robustness improvement obtained by retraining DNNs on these testing examples is unsatisfactory. To tackle this challenge, we introduce the concept of excitable neurons based on the Shapley value and design a white-box testing framework for DNNs, named DeepSensor. It is motivated by our observation that neurons bearing greater responsibility for changes in model loss under small perturbations are more likely to be related to incorrect corner cases caused by potential defects. By maximizing the number of excitable neurons that correspond to various incorrect model behaviors, DeepSensor can generate testing examples that effectively trigger more erroneous security issues caused by malicious inputs (both adversarial and polluted data) and incomplete training. Extensive experiments on both image classification models and speaker recognition models demonstrate the superiority of DeepSensor. Compared to state-of-the-art testing methods, DeepSensor finds more wrong model behaviors due to malicious inputs (approximately 1.2x for adversarial data and approximately 4.7x for polluted data) and incompletely trained DNNs. Additionally, it helps DNNs build a larger l2-norm robustness bound (approximately 3x) via retraining, according to CLEVER's certification. Furthermore, we provide interpretable certifications of DeepSensor's effectiveness by identifying excitable neurons and through t-SNE visualizations. The open-source code of DeepSensor is available at https://github.com/Allen-piexl/DeepSensor/.
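The abstract only outlines the excitable-neuron idea. The sketch below is a minimal, hypothetical illustration in PyTorch of one way to score per-neuron responsibility with a Monte-Carlo Shapley estimate of each hidden neuron's contribution to the loss change under a small input perturbation, and to flag the highest-scoring neurons as "excitable". The toy model, the probed layer, the number of sampling rounds, and the mean-plus-one-standard-deviation threshold are all illustrative assumptions, not the paper's actual algorithm or hyperparameters.

# Minimal sketch (not the authors' implementation): Monte-Carlo Shapley scores
# for hidden neurons, used to flag "excitable" neurons whose responsibility for
# the loss change under a small perturbation is unusually large.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier standing in for the DNN under test (illustrative assumption).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()
n_neurons = 32                                # neurons of the probed hidden layer

x = torch.randn(1, 20)                        # placeholder clean input
y = torch.tensor([3])                         # its label
x_pert = x + 0.05 * torch.randn_like(x)       # small perturbation of the input

def loss_with_mask(inp, mask):
    # Forward pass with a subset of hidden neurons ablated (mask entry 0 = off).
    h = F.relu(model[0](inp)) * mask
    return F.cross_entropy(model[2](h), y).item()

def shapley_scores(inp, rounds=200):
    # Monte-Carlo Shapley estimate: average marginal contribution of each neuron
    # to the loss, over random orderings in which neurons join the coalition.
    scores = torch.zeros(n_neurons)
    with torch.no_grad():
        for _ in range(rounds):
            mask = torch.zeros(1, n_neurons)
            prev = loss_with_mask(inp, mask)
            for j in torch.randperm(n_neurons):
                mask[0, j] = 1.0              # neuron j joins the coalition
                cur = loss_with_mask(inp, mask)
                scores[j] += cur - prev       # its marginal contribution
                prev = cur
    return scores / rounds

# Responsibility toward the change in loss caused by the perturbation.
responsibility = (shapley_scores(x_pert) - shapley_scores(x)).abs()
threshold = responsibility.mean() + responsibility.std()
excitable = (responsibility > threshold).nonzero().flatten()
print("excitable neurons:", excitable.tolist())

In the paper, testing examples are then generated by maximizing the number of such excitable neurons; the sketch above only covers the identification step.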
Pages: 20
Related Papers (50 in total)
  • [1] An intelligent cocoa quality testing framework based on deep learning techniques
    Essah R.
    Anand D.
    Singh S.
    Measurement: Sensors, 2022, 24
  • [2] DEVIATE: A Deep Learning Variance Testing Framework
    Pham, Hung Viet
    Kim, Mijung
    Tan, Lin
    Yu, Yaoliang
    Nagappan, Nachiappan
    2021 36TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING ASE 2021, 2021, : 1286 - 1290
  • [3] Validating a Deep Learning Framework by Metamorphic Testing
    Ding, Junhua
    Kang, Xiaojun
    Hu, Xin-Hua
    2017 IEEE/ACM 2ND INTERNATIONAL WORKSHOP ON METAMORPHIC TESTING (MET 2017), 2017, : 28 - 34
  • [4] A deep learning-based automated framework for functional User Interface testing
    Khaliq, Zubair
    Farooq, Sheikh Umar
    Khan, Dawood Ashraf
    INFORMATION AND SOFTWARE TECHNOLOGY, 2022, 150
  • [5] Mutation-Based Deep Learning Framework Testing Method in JavaScript Environment
    Zou, Yinglong
    Liu, Jiawei
    Zhai, Juan
    Zheng, Tao
    Fang, Chunrong
    Chen, Zhenyu
    arXiv,
  • [6] A New Perspective of Deep Learning Testing Framework: Human-Computer Interaction Based Neural Network Testing
    Kong, Wei
    Li, Hu
    Du, Qianjin
    Cao, Huayang
    Kuang, Xiaohui
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2024), 2024, : 16299 - 16305
  • [7] EEG-Based Excitement Detection in Immersive Environments: An Improved Deep Learning Approach
    Teo, Jason
    Chia, Jia Tian
    PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON APPLIED SCIENCE AND TECHNOLOGY (ICAST'18), 2018, 2016
  • [8] DEEPJUDGE: A Testing Framework for Copyright Protection of Deep Learning Models
    Chen, Jialuo
    Sun, Youcheng
    Wang, Jingyi
    Cheng, Peng
    Ma, Xingjun
    2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: COMPANION PROCEEDINGS, ICSE-COMPANION, 2023, : 64 - 67
  • [9] A Deep Learning-based Penetration Testing Framework for Vulnerability Identification in Internet of Things Environments
    Koroniotis, Nickolaos
    Moustafa, Nour
    Turnbull, Benjamin
    Schiliro, Francesco
    Gauravaram, Praveen
    Janicke, Helge
    2021 IEEE 20TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2021), 2021, : 887 - 894