Efficient state synchronisation in model-based testing through reinforcement learning

Cited by: 4
Authors
Turker, Uraz Cengiz [1 ]
Hierons, Robert M. [2 ]
Mousavi, Mohammad Reza [3 ]
Tyukin, Ivan Y. [1 ]
Affiliations
[1] Univ Leicester, Sch Comp & Math Sci, Leicester, Leics, England
[2] Univ Sheffield, Dept Comp Sci, Sheffield, S Yorkshire, England
[3] Kings Coll London, Dept Informat, London, England
Keywords
Model-based testing; synchronising sequence; reinforcement learning; Q-learning; reset sequences; algorithm; machine; identification; automata
DOI
10.1109/ASE51524.2021.9678566
Chinese Library Classification
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
Model-based testing is a structured method for testing complex systems. Scaling model-based testing up to large systems requires improving the efficiency of the steps involved in test-case generation and, more importantly, in test execution. One of the most costly steps of model-based testing is bringing the system to a known state, which is best achieved through synchronising sequences. A synchronising sequence is an input sequence that brings a given system to a predetermined state regardless of the system's initial state. Depending on its structure, a system may be complete, i.e., every input is applicable in every state; other systems are partial, in which case not every input is applicable in every state. Deriving synchronising sequences from complete or partial systems is a challenging task. In this paper, we introduce a novel Q-learning algorithm that can derive synchronising sequences from systems with complete or partial structures. The proposed algorithm is faster and can process larger systems than the fastest sequential algorithm that derives synchronising sequences from complete systems. Moreover, it is also faster and can process larger systems than the most recent massively parallel algorithm that derives synchronising sequences from partial systems. Furthermore, the proposed algorithm generates shorter synchronising sequences.
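For intuition only, the following minimal sketch shows how tabular Q-learning can be used to search for a synchronising sequence of a small, complete finite-state machine: the learning state is the set of FSM states the system might currently be in, actions are inputs, and a step penalty plus a bonus for collapsing the set to a singleton rewards short sequences. The 3-state machine (the classic Cerny automaton), the reward shaping, and all hyperparameters are illustrative assumptions; this is not the authors' algorithm from the paper, which also handles partial machines and far larger systems.

import random
from collections import defaultdict

# Hypothetical complete FSM (3-state Cerny automaton):
# transitions[state][input] -> next state. Its shortest synchronising
# sequence is "baab", which drives every state to state 1.
transitions = {
    0: {"a": 1, "b": 1},
    1: {"a": 2, "b": 1},
    2: {"a": 0, "b": 2},
}
inputs = ["a", "b"]
all_states = frozenset(transitions)

def apply_input(current_set, x):
    # Image of a set of possible FSM states under input x.
    return frozenset(transitions[s][x] for s in current_set)

Q = defaultdict(float)          # Q[(uncertainty_set, input)] -> value
alpha, gamma, epsilon, episodes = 0.5, 0.9, 0.2, 2000

for _ in range(episodes):
    current = all_states
    for _ in range(50):                              # cap episode length
        if random.random() < epsilon:
            x = random.choice(inputs)
        else:
            x = max(inputs, key=lambda i: Q[(current, i)])
        nxt = apply_input(current, x)
        # -1 per step favours short sequences; +10 once the set is a singleton.
        reward = 10.0 if len(nxt) == 1 else -1.0
        best_next = max(Q[(nxt, i)] for i in inputs)
        Q[(current, x)] += alpha * (reward + gamma * best_next - Q[(current, x)])
        current = nxt
        if len(current) == 1:
            break

# Greedy rollout of the learned policy yields a synchronising sequence.
current, sequence = all_states, []
while len(current) > 1 and len(sequence) < 50:
    x = max(inputs, key=lambda i: Q[(current, i)])
    sequence.append(x)
    current = apply_input(current, x)
print("synchronising sequence:", "".join(sequence))  # typically 'baab'

On this toy machine the greedy rollout usually recovers a shortest synchronising sequence; the paper's contribution is making such a search scale to complete and partial systems beyond the reach of existing sequential and massively parallel algorithms.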
Pages: 368-380
Page count: 13
Related papers
50 records in total
  • [1] Efficient hyperparameter optimization through model-based reinforcement learning
    Wu, Jia
    Chen, SenPeng
    Liu, XiYuan
    [J]. NEUROCOMPUTING, 2020, 409 : 381 - 393
  • [2] Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation
    Corneil, Dane
    Gerstner, Wulfram
    Brea, Johanni
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [3] An Efficient Approach to Model-Based Hierarchical Reinforcement Learning
    Li, Zhuoru
    Narayan, Akshay
    Leong, Tze-Yun
    [J]. THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3583 - 3589
  • [4] Efficient reinforcement learning: Model-based acrobot control
    Boone, G
    [J]. 1997 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION - PROCEEDINGS, VOLS 1-4, 1997, : 229 - 234
  • [5] Efficient Model-Based Concave Utility Reinforcement Learning through Greedy Mirror Descent
    Moreno, Bianca Marin
    Bregere, Margaux
    Gaillard, Pierre
    Oudjane, Nadia
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [6] Efficient hyperparameters optimization through model-based reinforcement learning with experience exploiting and meta-learning
    Liu, Xiyuan
    Wu, Jia
    Chen, Senpeng
    [J]. SOFT COMPUTING, 2023, 27 (13) : 8661 - 8678
  • [7] Model-based reinforcement learning in factored-state MDPs
    Strehl, Alexander L.
    [J]. 2007 IEEE INTERNATIONAL SYMPOSIUM ON APPROXIMATE DYNAMIC PROGRAMMING AND REINFORCEMENT LEARNING, 2007, : 103 - 110
  • [8] Abstract State Transition Graphs for Model-Based Reinforcement Learning
    Mendonca, Matheus R. F.
    Ziviani, Artur
    Barreto, Andre M. S.
    [J]. 2018 7TH BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 2018, : 115 - 120
  • [9] Improving Model-Based Reinforcement Learning with Internal State Representations through Self-Supervision
    Scholz, Julien
    Weber, Cornelius
    Hafez, Muhammad Burhan
    Wermter, Stefan
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021