Building Markov Decision Process Based Models of Remote Experimental Setups for State Evaluation

Cited by: 2
Authors
Maiti, Ananda [1 ]
Kist, Alexander A. [1 ]
Maxwell, Andrew D. [1 ]
Institution
[1] Univ Southern Queensland, Sch Mec & Elect Engn, Toowoomba, Qld 4350, Australia
Keywords
DOI
10.1109/SSCI.2015.65
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Remote Access Laboratories (RAL) are online environments that allow users to interact with instruments through the Internet. RALs are governed by a Remote Laboratory Management System (RLMS) that provides the specific control technology and control policies for an experiment and the corresponding hardware. Normally, in a centralized RAL these control strategies and policies are created by the experiment providers in the RLMS. In a distributed peer-to-peer RAL scenario, individual users design their own rigs and are incapable of producing and enforcing the control policies needed to ensure safe and stable use of the experimental rigs. The experiment controllers in such a scenario must therefore be smart enough to learn and enforce those policies. This paper discusses a method to create a Markov Decision Process from the user's interactions with the experimental rig and to use it to ensure stability, as well as to support other users by evaluating the current state of the rig in their experimental session.
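The core idea in the abstract, learning an MDP from logged user interactions and using it to evaluate the current rig state, can be illustrated with a minimal sketch. This is not the authors' implementation; the state names, the interaction log, and the one-step risk measure are hypothetical, and transition probabilities are simply maximum-likelihood estimates from observed (state, action, next_state) tuples.

```python
from collections import defaultdict

def build_mdp(transitions):
    """Estimate MDP transition probabilities from a log of
    (state, action, next_state) tuples via relative frequencies."""
    counts = defaultdict(lambda: defaultdict(int))
    for s, a, s_next in transitions:
        counts[(s, a)][s_next] += 1
    probs = {}
    for (s, a), nexts in counts.items():
        total = sum(nexts.values())
        probs[(s, a)] = {sn: c / total for sn, c in nexts.items()}
    return probs

def unsafe_probability(probs, state, action, unsafe_states):
    """One-step probability that taking `action` in `state`
    moves the rig into an unsafe state."""
    return sum(p for sn, p in probs.get((state, action), {}).items()
               if sn in unsafe_states)

# Hypothetical interaction log for an experimental rig.
log = [
    ("idle", "start", "running"),
    ("running", "increase_power", "running"),
    ("running", "increase_power", "overheated"),
    ("overheated", "shutdown", "idle"),
]
mdp = build_mdp(log)
risk = unsafe_probability(mdp, "running", "increase_power", {"overheated"})
# risk == 0.5: half of the observed transitions led to "overheated"
```

A controller could refuse or flag actions whose estimated risk exceeds a threshold, which is one way such a learned model could enforce stability policies in a peer-to-peer RAL.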
Pages: 389-397
Page count: 9
Related Papers
50 records in total
  • [21] HIERARCHICAL MARKOV DECISION PROCESS BASED ON DEVS FORMALISM
    Kessler, Celine
    Capocchi, Laurent
    Santucci, Jean-Francois
    Zeigler, Bernard
    2017 WINTER SIMULATION CONFERENCE (WSC), 2017, : 1001 - 1012
  • [22] A Tensor-Based Markov Decision Process Representation
    Kuinchtner, Daniela
    Meneguzzi, Felipe
    Sales, Afonso
    ADVANCES IN SOFT COMPUTING, MICAI 2020, PT I, 2020, 12468 : 313 - 324
  • [23] Design of Opportunistic Routing Based on Markov Decision Process
    Hao, Jun
    Jia, Xinchun
    Han, Zongyuan
    Yang, Bo
    Peng, Dengyong
    PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE (CCC 2017), 2017, : 8976 - 8981
  • [24] The Intervention of Community Evolution Based on Markov Decision Process
    Chai, Pei-Hua
    Man, Jun-Yi
    Zeng, Yi-Feng
    Cao, Lang-Cai
    Dongbei Daxue Xuebao/Journal of Northeastern University, 2022, 43 (11): : 1536 - 1543
  • [25] Approximate Regret Based Elicitation in Markov Decision Process
    Alizadeh, Pegah
    Chevaleyre, Yann
    Zucker, Jean-Daniel
    2015 IEEE RIVF INTERNATIONAL CONFERENCE ON COMPUTING & COMMUNICATION TECHNOLOGIES - RESEARCH, INNOVATION, AND VISION FOR THE FUTURE (RIVF), 2015, : 47 - 52
  • [26] A framework for building spreadsheet based decision models
    Mather, D
    JOURNAL OF THE OPERATIONAL RESEARCH SOCIETY, 1999, 50 (01) : 70 - 74
  • [27] Analysis and Comparison of Two Task Models in a Partially Observable Markov Decision Process Based Assistive System
    Jean-Baptiste, Emilie M. D.
    Mihailidis, Alex
    2017 IEEE 4TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE (ISCMI), 2017, : 183 - 187
  • [28] SIMULATION-BASED SETS OF SIMILAR-PERFORMING ACTIONS IN FINITE MARKOV DECISION PROCESS MODELS
    Marrero, Wesley J.
    2022 WINTER SIMULATION CONFERENCE (WSC), 2022, : 3217 - 3228
  • [29] Approximate steady-state analysis of large Markov models based on the structure of their decision diagram encoding
    Wan, Min
    Ciardo, Gianfranco
    Miner, Andrew S.
    PERFORMANCE EVALUATION, 2011, 68 (05) : 463 - 486
  • [30] Rapid approximation of confidence intervals for Markov process decision models: Applications in decision support systems
    Cher, DJ
    Lenert, LA
    JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 1997, 4 (04) : 301 - 312