Efficient state synchronisation in model-based testing through reinforcement learning

Cited by: 4
Authors
Turker, Uraz Cengiz [1 ]
Hierons, Robert M. [2 ]
Mousavi, Mohammad Reza [3 ]
Tyukin, Ivan Y. [1 ]
Affiliations
[1] Univ Leicester, Sch Comp & Math Sci, Leicester, Leics, England
[2] Univ Sheffield, Dept Comp Sci, Sheffield, S Yorkshire, England
[3] Kings Coll London, Dept Informat, London, England
Keywords
Model-based testing; synchronising sequence; reinforcement learning; Q-learning; reset sequences; algorithm; machine; identification; automata
DOI
10.1109/ASE51524.2021.9678566
CLC number
TP31 [Computer Software];
Subject classification codes
081202 ; 0835 ;
Abstract
Model-based testing is a structured method for testing complex systems. Scaling model-based testing up to large systems requires improving the efficiency of the various steps involved in test-case generation and, more importantly, in test execution. One of the most costly steps of model-based testing is bringing the system to a known state, which is best achieved through synchronising sequences. A synchronising sequence is an input sequence that brings a given system to a predetermined state regardless of the system's initial state. Depending on its structure, a system may be complete, i.e., all inputs are applicable in every state. However, some systems are partial, and in this case not all inputs are applicable in every state. Deriving synchronising sequences from complete or partial systems is a challenging task. In this paper, we introduce a novel Q-learning algorithm that can derive synchronising sequences from systems with complete or partial structures. The proposed algorithm is faster and can process larger systems than the fastest sequential algorithm that derives synchronising sequences from complete systems. Moreover, it is also faster and can process larger systems than the most recent massively parallel algorithm that derives synchronising sequences from partial systems. Furthermore, the proposed algorithm generates shorter synchronising sequences.
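The abstract's key notions can be made concrete with a small sketch. The Python toy below is purely illustrative and is not the authors' algorithm: it casts reset-word search as tabular Q-learning in which the learning state is the set of automaton states still considered possible, the actions are the machine's inputs, and the reward is how much an input shrinks that set. The automaton is the classic four-state Černý machine (complete, so every input is defined in every state); the reward design, hyperparameters, and all names are assumptions made for this sketch.

import random
from collections import defaultdict

# Complete toy automaton (the 4-state Cerny machine):
# delta[state][input] -> next state; every input is defined in every state.
delta = {
    0: {'a': 1, 'b': 0},
    1: {'a': 2, 'b': 1},
    2: {'a': 3, 'b': 2},
    3: {'a': 0, 'b': 0},
}
ACTIONS = ('a', 'b')

def apply_input(states, x):
    # Image of a set of possible current states under input x.
    return frozenset(delta[s][x] for s in states)

Q = defaultdict(float)             # Q[(set_of_states, input)] -> value
alpha, gamma, eps = 0.5, 0.9, 0.2  # assumed learning rate, discount, exploration
random.seed(0)

for _ in range(2000):              # training episodes
    s = frozenset(delta)           # start with total uncertainty: all states
    for _ in range(50):            # cap the episode length
        if len(s) == 1:            # synchronised: one known state, episode over
            break
        if random.random() < eps:  # epsilon-greedy exploration
            x = random.choice(ACTIONS)
        else:
            x = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = apply_input(s, x)
        r = len(s) - len(s2)       # reward: reduction in uncertainty
        best_next = 0.0 if len(s2) == 1 else max(Q[(s2, a)] for a in ACTIONS)
        Q[(s, x)] += alpha * (r + gamma * best_next - Q[(s, x)])
        s = s2

# Read a synchronising sequence off the learned Q-table greedily.
s, word = frozenset(delta), []
while len(s) > 1 and len(word) < 50:
    x = max(ACTIONS, key=lambda a: Q[(s, a)])
    word.append(x)
    s = apply_input(s, x)
print(''.join(word), '->', set(s))

For this machine the shortest synchronising sequence has length 9 ('baaabaaab' is one), so the greedy rollout at the end prints a reset word driving every initial state to a single known state. The paper's actual Q-learning formulation, which also handles partial machines and scales to large systems, is given in the full text.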
Pages: 368-380
Number of pages: 13