Deep learning framework testing via hierarchical and heuristic model generation

Cited by: 3
|
Authors
Zou, Yinglong
Sun, Haofeng
Fang, Chunrong [1 ]
Liu, Jiawei
Zhang, Zhenping
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Software testing; Deep learning framework; Hierarchical and heuristic model generation; Precision bug;
DOI
10.1016/j.jss.2023.111681
Chinese Library Classification
TP31 [Computer Software];
Discipline codes
081202; 0835;
Abstract
Deep learning frameworks are the foundation of deep learning model construction and inference. Many testing methods that use deep learning models as test inputs have been proposed to ensure the quality of deep learning frameworks. However, critical challenges remain in model generation, model instantiation, and result analysis. To bridge the gap, we propose Ramos, a hierarchical and heuristic deep learning framework testing method. To generate diversified models, we design a novel hierarchical structure to represent the building blocks of a model. Based on this structure, new models are generated by a mutation method. To trigger more precision bugs in deep learning frameworks, we design a heuristic method to amplify the error triggered by models and to guide subsequent model generation. To reduce false positives, we propose an API mapping rule between different frameworks to aid model instantiation. Further, we design separate test oracles for crashes and for precision bugs. We conduct experiments on three widely used frameworks (TensorFlow, PyTorch, and MindSpore) to evaluate the effectiveness of Ramos. The results show that Ramos can effectively generate diversified models and detect more deep learning framework bugs, including crashes and precision bugs, with fewer false positives. Additionally, 14 of the 15 reported bugs have been confirmed by developers. © 2023 Elsevier Inc. All rights reserved.
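The abstract describes two separate test oracles: one for crashes (a framework raises an exception while running a generated model) and one for precision bugs (frameworks disagree numerically on the same model and inputs). A minimal pure-Python sketch of that oracle split is shown below; the function names, the flat-list outputs, and the tolerance value are illustrative assumptions, not details from the paper.

```python
# Sketch of differential test oracles for cross-framework testing.
# Each framework backend is modeled as a callable that either returns
# a flat list of floats or raises an exception (a "crash").

CRASH = "crash"
PRECISION_BUG = "precision bug"
PASS = "pass"


def run_backend(backend_fn, inputs):
    """Run one framework backend; capture a crash instead of propagating it."""
    try:
        return backend_fn(inputs), None
    except Exception as exc:  # any runtime failure counts as a crash
        return None, exc


def oracle(out_a, err_a, out_b, err_b, tol=1e-4):
    """Classify a differential run of the same model on two frameworks."""
    if err_a is not None or err_b is not None:
        return CRASH  # crash oracle: either backend failed to execute
    max_diff = max(abs(x - y) for x, y in zip(out_a, out_b))
    # precision oracle: outputs diverge beyond the tolerance
    return PRECISION_BUG if max_diff > tol else PASS


# Hypothetical backends standing in for, e.g., TensorFlow and PyTorch.
def backend_a(xs):
    return [x * 2.0 for x in xs]


def backend_b(xs):
    return [x * 2.0 + 1e-3 for x in xs]  # small numerical drift


out_a, err_a = run_backend(backend_a, [1.0, 2.0])
out_b, err_b = run_backend(backend_b, [1.0, 2.0])
verdict = oracle(out_a, err_a, out_b, err_b)
```

In this sketch, `verdict` is `"precision bug"` because the backends differ by `1e-3`, above the illustrative `1e-4` tolerance; in the paper's setting, the API mapping rule would first ensure both frameworks instantiate semantically equivalent models so that such disagreements are not false positives.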
Pages: 13