Building Markov Decision Process Based Models of Remote Experimental Setups for State Evaluation

Cited by: 2
Authors
Maiti, Ananda [1 ]
Kist, Alexander A. [1 ]
Maxwell, Andrew D. [1 ]
Affiliations
[1] Univ Southern Queensland, Sch Mech & Elect Engn, Toowoomba, Qld 4350, Australia
Keywords
DOI
10.1109/SSCI.2015.65
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Remote Access Laboratories (RAL) are online environments that allow users to interact with instruments through the Internet. RALs are governed by a Remote Laboratory Management System (RLMS) that provides the specific control technology and control policies for an experiment and the corresponding hardware. Normally, in a centralized RAL, these control strategies and policies are created by the experiment providers in the RLMS. In a distributed peer-to-peer RAL scenario, individual users design their own rigs and are unable to produce and enforce the control policies needed to ensure safe and stable use of the experimental rigs. The experiment controllers in such a scenario must therefore be smart enough to learn and enforce those policies. This paper discusses a method to create a Markov Decision Process from the user's interactions with the experimental rig and to use it to ensure stability, as well as to support other users by evaluating the current state of the rig in their experimental session.
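The approach described in the abstract, learning an MDP from logged user interactions and then scoring rig states, can be illustrated with a minimal sketch. The state and action names, reward values, and helper functions below are hypothetical, not taken from the paper: transition probabilities are estimated by counting logged (state, action, next_state) tuples, and states are then evaluated with standard value iteration.

```python
from collections import defaultdict

def estimate_mdp(logs):
    """Estimate transition probabilities P(s' | s, a) from
    logged (state, action, next_state) interaction tuples."""
    counts = defaultdict(lambda: defaultdict(int))
    for s, a, s_next in logs:
        counts[(s, a)][s_next] += 1
    probs = {}
    for (s, a), nxt in counts.items():
        total = sum(nxt.values())
        probs[(s, a)] = {sn: c / total for sn, c in nxt.items()}
    return probs

def evaluate_states(probs, rewards, gamma=0.9, iters=100):
    """Value iteration: score each state by its expected discounted
    reward under the best available action."""
    states = {s for (s, _) in probs} | {sn for d in probs.values() for sn in d}
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V_new = {}
        for s in states:
            actions = [a for (s2, a) in probs if s2 == s]
            if not actions:                      # terminal / unseen state
                V_new[s] = rewards.get(s, 0.0)
                continue
            V_new[s] = rewards.get(s, 0.0) + gamma * max(
                sum(p * V[sn] for sn, p in probs[(s, a)].items())
                for a in actions
            )
        V = V_new
    return V

# Toy interaction log; 'safe' and 'overload' are invented rig states.
logs = [("idle", "start", "safe"), ("idle", "start", "safe"),
        ("safe", "increase", "overload"), ("safe", "hold", "safe"),
        ("overload", "reset", "idle")]
probs = estimate_mdp(logs)
rewards = {"safe": 1.0, "overload": -10.0, "idle": 0.0}
V = evaluate_states(probs, rewards)
```

A controller could flag a session whenever the rig enters a state whose value falls below a threshold, which matches the paper's idea of evaluating the current state of the rig during an experimental session.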
Pages: 389 - 397
Number of pages: 9
Related papers
50 in total
  • [1] Software for Building Markov State Models
    Bowman, Gregory R.
    Noe, Frank
    INTRODUCTION TO MARKOV STATE MODELS AND THEIR APPLICATION TO LONG TIMESCALE MOLECULAR SIMULATION, 2014, 797 : 139 - 139
  • [2] Using Markov Decision Process for Recommendations Based on Aggregated Decision Data Models
    Petrusel, Razvan
    BUSINESS INFORMATION SYSTEMS, BIS 2013, 2013, 157 : 125 - 137
  • [4] Building Markov state models with solvent dynamics
    Gu, Chen
    Chang, Huang-Wei
    Maibaum, Lutz
    Pande, Vijay S.
    Carlsson, Gunnar E.
    Guibas, Leonidas J.
    BMC BIOINFORMATICS, 2013, 14
  • [5] A Markov decision process with delayed state availability
    White, CC
    Bander, JL
    INFORMATION INTELLIGENCE AND SYSTEMS, VOLS 1-4, 1996, : 2689 - 2691
  • [6] Evaluation and Optimization of Kernel File Readaheads Based on Markov Decision Models
    Xu, Chenfeng
    Xi, Hongsheng
    Wu, Fengguang
    COMPUTER JOURNAL, 2011, 54 (11): : 1741 - 1755
  • [7] Implementation and Evaluation of Adaptive Video Streaming based on Markov Decision Process
    Bokani, Ayub
    Hoseini, S. Amir
    Hassan, Mahbub
    Kanhere, Salil S.
    2016 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2016,
  • [8] An Overview and Practical Guide to Building Markov State Models
    Bowman, Gregory R.
    INTRODUCTION TO MARKOV STATE MODELS AND THEIR APPLICATION TO LONG TIMESCALE MOLECULAR SIMULATION, 2014, 797 : 7 - 22
  • [10] Embedding a state space model into a Markov decision process
    Nielsen, Lars Relund
    Jorgensen, Erik
    Hojsgaard, Soren
    ANNALS OF OPERATIONS RESEARCH, 2011, 190 (01) : 289 - 309