Analyzing neural network behavior through deep statistical model checking

Cited by: 0
Authors
Timo P. Gros
Holger Hermanns
Jörg Hoffmann
Michaela Klauck
Marcel Steinmetz
Affiliations
[1] Saarland University, Saarland Informatics Campus
Keywords
Statistical model checking; Neural networks; Learning; Verification; Scalability;
DOI
Not available
Abstract
Neural networks (NN) are taking over ever more decisions thus far taken by humans, even though verifiable system-level guarantees are far out of reach. Neither is the verification technology available, nor is it even understood what a formal, meaningful, extensible, and scalable testbed might look like for such a technology. The present paper is an attempt to improve on both of these aspects. We present a family of formal models that contain basic features of automated decision-making contexts and which can be extended with further orthogonal features, ultimately encompassing the scope of autonomous driving. Because random noise in the decision actuation can be modelled, each model instance induces a Markov decision process (MDP) as verification object. The NN in this context has the duty to actuate (near-optimal) decisions. From the verification perspective, the externally learnt NN serves as a determinizer of the MDP, the result being a Markov chain which, as such, is amenable to statistical model checking. The combination of an MDP and an NN encoding the action policy is central to what we call “deep statistical model checking” (DSMC). While a straightforward extension of statistical model checking, it enables us to gain deep insight into questions like “how high is the NN-induced safety risk?”, “how good is the NN compared to the optimal policy?” (obtained by model checking the MDP), or “does further training improve the NN?”. We report on an implementation of DSMC inside the Modest Toolset in combination with externally learnt NNs, demonstrating the potential of DSMC on various instances of the model family, and illustrating its scalability as a function of instance size as well as other factors like the degree of NN training.
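The core idea of the abstract, using a policy to resolve the nondeterminism of an MDP and then statistically model checking the induced Markov chain, can be sketched in a few lines. The following is a minimal illustration on a hypothetical toy corridor model (not the paper's benchmark family): a hard-coded policy stands in for the learnt NN, actuation noise makes transitions probabilistic, and Monte Carlo sampling estimates the goal-reachability probability of the policy-induced chain.

```python
import random

# Toy 1-D corridor MDP (hypothetical example): states -1..5,
# goal at 5, unsafe sink at -1. Actuation noise flips the chosen
# move with probability 0.1, mirroring "random noise in the
# decision actuation" from the abstract.
GOAL, BAD, NOISE = 5, -1, 0.1

def step(state, action):
    move = 1 if action == "right" else -1
    if random.random() < NOISE:   # noisy actuation of the decision
        move = -move
    return state + move

def policy(state):
    # Stand-in for the externally learnt NN: it determinizes the
    # MDP by picking one action per state (here: always "right").
    return "right"

def simulate(max_steps=100):
    # One sampled run of the policy-induced Markov chain.
    state = 2
    for _ in range(max_steps):
        if state == GOAL:
            return True
        if state == BAD:
            return False
        state = step(state, policy(state))
    return False

def estimate_reachability(runs=20_000):
    # Statistical model checking of the induced chain: the sample
    # mean estimates P(reach goal before the unsafe sink).
    random.seed(0)
    return sum(simulate() for _ in range(runs)) / runs

print(estimate_reachability())
```

Swapping in a worse policy (or a less-trained NN) lowers the estimate, which is exactly the kind of comparison, NN quality versus the model-checked optimum, that DSMC is designed to enable.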
Pages: 407–426
Page count: 19
Related papers
50 items in total
  • [31] A Survey of Statistical Model Checking
    Agha, Gul
    Palmskog, Karl
    ACM TRANSACTIONS ON MODELING AND COMPUTER SIMULATION, 2018, 28 (01):
  • [32] Statistical Model Checking for P
    Duran, Francisco
    Pozas, Nicolas
    Ramirez, Carlos
    Rocha, Camilo
    FORMAL METHODS FOR INDUSTRIAL CRITICAL SYSTEMS, FMICS 2023, 2023, 14290 : 40 - 56
  • [33] Analyzing interaction orderings with model checking
    Dwyer, MB
    Robby
    Tkachuk, O
    Visser, W
    19TH INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, PROCEEDINGS, 2004, : 154 - 163
  • [34] Statistical Model Checking for Hyperproperties
    Wang, Yu
    Nalluri, Siddhartha
    Bonakdarpour, Borzoo
    Pajic, Miroslav
    2021 IEEE 34TH COMPUTER SECURITY FOUNDATIONS SYMPOSIUM (CSF 2021), 2021, : 1 - 16
  • [35] On the Power of Statistical Model Checking
    Larsen, Kim G.
    Legay, Axel
    LEVERAGING APPLICATIONS OF FORMAL METHODS, VERIFICATION AND VALIDATION: DISCUSSION, DISSEMINATION, APPLICATIONS, ISOLA 2016, PT II, 2016, 9953 : 843 - 862
  • [36] On Statistical Model Checking with PLASMA
    Legay, Axel
    Sedwards, Sean
    2014 THEORETICAL ASPECTS OF SOFTWARE ENGINEERING CONFERENCE (TASE), 2014, : 139 - 145
  • [37] Statistical Model Checking: An Overview
    Legay, Axel
    Delahaye, Benoit
    Bensalem, Saddek
    RUNTIME VERIFICATION, 2010, 6418 : 122 - +
  • [38] Neural networks of motor learning: Analyzing function through behavior
    James, GA
    Tang, YY
    Liu, YJ
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 2004, 39 (5-6) : 464 - 464
  • [39] Analyzing Deep Neural Network's Transferability via Frechet Distance
    Ding, Yifan
    Wang, Liqiang
    Gong, Boqing
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WACV 2021, 2021, : 3931 - 3940
  • [40] Statistical insights into deep neural network learning in subspace classification
    Wu, Hao
    Fan, Yingying
    Lv, Jinchi
    STAT, 2020, 9 (01):