Efficient state synchronisation in model-based testing through reinforcement learning

Cited by: 4
Authors
Turker, Uraz Cengiz [1]
Hierons, Robert M. [2]
Mousavi, Mohammad Reza [3]
Tyukin, Ivan Y. [1]
Affiliations
[1] Univ Leicester, Sch Comp & Math Sci, Leicester, Leics, England
[2] Univ Sheffield, Dept Comp Sci, Sheffield, S Yorkshire, England
[3] Kings Coll London, Dept Informat, London, England
Keywords
Model based testing; synchronising sequence; reinforcement learning; Q-learning; RESET SEQUENCES; ALGORITHM; MACHINE; IDENTIFICATION; AUTOMATA;
DOI
10.1109/ASE51524.2021.9678566
CLC Classification Number
TP31 [Computer software];
Subject Classification Codes
081202; 0835;
Abstract
Model-based testing is a structured method for testing complex systems. Scaling model-based testing up to large systems requires improving the efficiency of the various steps involved in test-case generation and, more importantly, in test execution. One of the most costly steps of model-based testing is bringing the system to a known state, which is best achieved through synchronising sequences. A synchronising sequence is an input sequence that brings a given system to a predetermined state regardless of the system's initial state. Depending on its structure, a system might be complete, i.e., all inputs are applicable at every state. However, some systems are partial, and in this case not all inputs are usable at every state. Deriving synchronising sequences from complete or partial systems is a challenging task. In this paper, we introduce a novel Q-learning algorithm that can derive synchronising sequences from systems with complete or partial structures. The proposed algorithm is faster and can process larger systems than the fastest sequential algorithm that derives synchronising sequences from complete systems. Moreover, it is also faster and can process larger systems than the most recent massively parallel algorithm that derives synchronising sequences from partial systems. Furthermore, the proposed algorithm generates shorter synchronising sequences.
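The core notion in the abstract can be illustrated with a small sketch (not the authors' implementation, which targets large complete and partial machines): tabular Q-learning over sets of possible current states of a toy complete machine, here the classical Černý automaton C_4. The machine, hyperparameters, and function names below are all illustrative assumptions.

```python
import random

# Toy complete FSM: the Cerny automaton C_4.
# Input "a" cyclically shifts the state; "b" maps state 3 to 0 and fixes the rest.
# Its shortest synchronising word, "baaabaaab", has length (4-1)^2 = 9.
DELTA = {
    0: {"a": 1, "b": 0},
    1: {"a": 2, "b": 1},
    2: {"a": 3, "b": 2},
    3: {"a": 0, "b": 0},
}
INPUTS = ["a", "b"]

def step(states, x):
    """Image of a set of possible current states under input x."""
    return frozenset(DELTA[s][x] for s in states)

def is_synchronising(word):
    """A word synchronises the FSM iff it maps the full state set to a singleton."""
    states = frozenset(DELTA)
    for x in word:
        states = step(states, x)
    return len(states) == 1

def q_learn_reset_word(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2,
                       max_len=50, seed=0):
    """Tabular Q-learning on the power-set dynamics: an MDP state is the set of
    possible FSM states, an action is an input symbol, the reward is -1 per
    input, and an episode ends once the set collapses to a singleton."""
    rng = random.Random(seed)
    Q = {}
    q = lambda s, a: Q.get((s, a), 0.0)
    start = frozenset(DELTA)
    for _ in range(episodes):
        s = start
        for _ in range(max_len):
            if len(s) == 1:
                break
            if rng.random() < eps:
                a = rng.choice(INPUTS)                  # explore
            else:
                a = max(INPUTS, key=lambda x: q(s, x))  # exploit
            s2 = step(s, a)
            future = 0.0 if len(s2) == 1 else max(q(s2, x) for x in INPUTS)
            Q[(s, a)] = q(s, a) + alpha * (-1.0 + gamma * future - q(s, a))
            s = s2
    # Greedy rollout of the learned policy yields a candidate synchronising word.
    word, s = [], start
    while len(s) > 1 and len(word) < max_len:
        a = max(INPUTS, key=lambda x: q(s, x))
        word.append(a)
        s = step(s, a)
    return "".join(word) if len(s) == 1 else None
```

On this four-state toy machine the subset space is tiny, so the greedy policy typically recovers a short reset word; the paper's contribution is making this style of learning scale to large complete and partial machines, which brute-force subset search cannot handle.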
Pages: 368 - 380
Number of pages: 13