Efficient state synchronisation in model-based testing through reinforcement learning

Cited by: 4
Authors
Turker, Uraz Cengiz [1 ]
Hierons, Robert M. [2 ]
Mousavi, Mohammad Reza [3 ]
Tyukin, Ivan Y. [1 ]
Affiliations
[1] Univ Leicester, Sch Comp & Math Sci, Leicester, Leics, England
[2] Univ Sheffield, Dept Comp Sci, Sheffield, S Yorkshire, England
[3] Kings Coll London, Dept Informat, London, England
Keywords
Model-based testing; synchronising sequence; reinforcement learning; Q-learning; reset sequences; algorithm; machine; identification; automata
DOI
10.1109/ASE51524.2021.9678566
CLC classification number
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
Model-based testing is a structured method for testing complex systems. Scaling model-based testing up to large systems requires improving the efficiency of the various steps involved in test-case generation and, more importantly, in test execution. One of the most costly steps of model-based testing is bringing the system to a known state, which is best achieved through a synchronising sequence: an input sequence that brings a given system to a predetermined state regardless of the system's initial state. Depending on its structure, a system may be complete, i.e., every input is applicable at every state, or partial, in which case some inputs are not applicable at some states. Deriving synchronising sequences from complete or partial systems is a challenging task. In this paper, we introduce a novel Q-learning algorithm that can derive synchronising sequences from systems with complete or partial structures. The proposed algorithm is faster and can process larger systems than the fastest sequential algorithm that derives synchronising sequences from complete systems. It is also faster and can process larger systems than the most recent massively parallel algorithm that derives synchronising sequences from partial systems. Furthermore, the proposed algorithm generates shorter synchronising sequences.
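To make the idea concrete, the sketch below shows (under stated assumptions, not as the authors' algorithm) how tabular Q-learning can search for a synchronising sequence. The agent's state is the set of states the machine could currently be in, its actions are the machine's inputs, and it is rewarded when that set collapses to a singleton; an undefined transition models a partial machine. The toy machine `delta`, the reward values, and all hyperparameters are hypothetical. For this machine, input 'b' sends every state to state 0, so 'b' by itself is a synchronising sequence, and the learned greedy policy should recover it.

```python
import random

# Hypothetical toy machine: states 0..2, inputs 'a' and 'b'.
# delta[state][input] -> next state; a missing entry would model an
# undefined transition in a partial machine (all defined here).
delta = {
    0: {'a': 1, 'b': 0},
    1: {'a': 2, 'b': 0},
    2: {'a': 2, 'b': 0},
}
INPUTS = ['a', 'b']

def step(subset, x):
    """Apply input x to every state in the current uncertainty set.
    Returns None if x is undefined at some state (partial machine)."""
    nxt = set()
    for s in subset:
        t = delta[s].get(x)
        if t is None:
            return None
        nxt.add(t)
    return frozenset(nxt)

def q_learn_sync(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, max_len=20):
    """Tabular Q-learning over uncertainty sets: an episode succeeds
    when the set of possible current states collapses to a singleton."""
    Q = {}                      # (frozenset of states, input) -> value
    start = frozenset(delta)    # initially the machine could be anywhere
    for _ in range(episodes):
        cur = start
        for _ in range(max_len):
            if random.random() < eps:          # epsilon-greedy exploration
                x = random.choice(INPUTS)
            else:
                x = max(INPUTS, key=lambda a: Q.get((cur, a), 0.0))
            nxt = step(cur, x)
            old = Q.get((cur, x), 0.0)
            if nxt is None:                    # undefined input: penalise
                Q[(cur, x)] = old + alpha * (-1.0 - old)
                break
            r = 1.0 if len(nxt) == 1 else -0.01  # reward synchronisation
            future = 0.0 if len(nxt) == 1 else max(
                Q.get((nxt, a), 0.0) for a in INPUTS)
            Q[(cur, x)] = old + alpha * (r + gamma * future - old)
            cur = nxt
            if len(cur) == 1:
                break
    # Greedy rollout of the learned policy yields a candidate sequence.
    seq, cur = [], start
    while len(cur) > 1 and len(seq) < max_len:
        x = max(INPUTS, key=lambda a: Q.get((cur, a), 0.0))
        nxt = step(cur, x)
        if nxt is None:
            break
        seq.append(x)
        cur = nxt
    return seq, cur

if __name__ == '__main__':
    seq, final = q_learn_sync()
    print(seq, sorted(final))   # e.g. ['b'] [0]: 'b' synchronises the machine
```

The per-step penalty of -0.01 biases the agent toward shorter sequences, echoing the sequence-length concern in the abstract; a real implementation would also have to represent the uncertainty set compactly, since enumerating subsets does not scale to large machines.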
Pages: 368-380
Page count: 13