Building Markov Decision Process Based Models of Remote Experimental Setups for State Evaluation

Cited by: 2
Authors
Maiti, Ananda [1 ]
Kist, Alexander A. [1 ]
Maxwell, Andrew D. [1 ]
Affiliation
[1] Univ Southern Queensland, Sch Mech & Elect Engn, Toowoomba, Qld 4350, Australia
DOI
10.1109/SSCI.2015.65
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Remote Access Laboratories (RAL) are online environments that allow users to interact with instruments through the Internet. RALs are governed by a Remote Laboratory Management System (RLMS) that provides the specific control technology and control policies for an experiment and its corresponding hardware. Normally, in a centralized RAL these control strategies and policies are created by the experiment providers in the RLMS. In a distributed peer-to-peer RAL scenario, however, individual users design their own rigs and are unable to produce and enforce the control policies needed to ensure safe and stable use of the experimental rigs. The experiment controllers in such a scenario must therefore be smart enough to learn and enforce those policies. This paper discusses a method to create a Markov Decision Process from a user's interactions with the experimental rig and to use it both to ensure stability and to support other users by evaluating the current state of the rig during their experimental sessions.
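The abstract describes estimating an MDP from logged user interactions and using it to evaluate whether the rig's current state is expected or anomalous. A minimal sketch of that idea, assuming interactions are logged as (state, action, next_state) triples (the function names and state labels here are illustrative, not taken from the paper):

```python
from collections import defaultdict

def build_mdp(transitions):
    """Estimate transition probabilities P(s' | s, a) by counting
    logged (state, action, next_state) triples and normalizing."""
    counts = defaultdict(lambda: defaultdict(int))
    for s, a, s_next in transitions:
        counts[(s, a)][s_next] += 1
    model = {}
    for (s, a), nexts in counts.items():
        total = sum(nexts.values())
        model[(s, a)] = {sn: c / total for sn, c in nexts.items()}
    return model

def state_likelihood(model, s, a, s_next):
    """Estimated probability that s_next follows (s, a).
    A value near zero flags an unusual, possibly unsafe, rig state."""
    return model.get((s, a), {}).get(s_next, 0.0)

# Hypothetical interaction log from one experimental session.
log = [
    ("idle", "start", "running"),
    ("idle", "start", "running"),
    ("idle", "start", "fault"),
    ("running", "stop", "idle"),
]
model = build_mdp(log)
print(state_likelihood(model, "idle", "start", "running"))  # ~0.667
```

A controller could compare incoming transitions against such a learned model and intervene when the observed state has very low estimated probability, which is one simple way to realize the state-evaluation role the abstract describes.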
Pages: 389 - 397 (9 pages)
Related Papers
50 results
  • [31] Incorporating risk attitude into Markov-process decision models: Importance for individual decision making
    Cher, DJ
    Miyamoto, J
    Lenert, LA
    MEDICAL DECISION MAKING, 1997, 17 (03) : 340 - 350
  • [32] Markov reward models and markov decision processes in discrete and continuous time: Performance evaluation and optimization
    Gouberman, Alexander
    Siegle, Markus
    LECTURE NOTES IN COMPUTER SCIENCE, 2014, 8453 : 156 - 241
  • [33] Making decision for sustainable product lifecycle: a Markov decision process based method
    Wang, Junfeng
    Wang, Ning
    Rao, Jinfeng
    FRONTIERS OF MANUFACTURING AND DESIGN SCIENCE II, PTS 1-6, 2012, 121-126 : 2080 - 2084
  • [34] Weapon Target Assignment Decision Based on Markov Decision Process in Air Defense
    Ma, Yaofei
    Chou, Chaohong
    SYSTEM SIMULATION AND SCIENTIFIC COMPUTING, PT II, 2012, 327 : 353 - 360
  • [35] Strategic Decision for Crowd-Sensing: An Approach based on Markov Decision Process
    Ray, Arpita
    Chowdhury, Chandreyee
    Roy, Sarbani
    2017 IEEE INTERNATIONAL CONFERENCE ON ADVANCED NETWORKS AND TELECOMMUNICATIONS SYSTEMS (ANTS), 2017,
  • [36] Process Evaluation for Concept Map Building and Its Experimental Evaluation
    Rismanto, Ridwan
    Pinandito, Aryo
    Andoko, Banni Satria
    Hayashi, Yusuke
    Hirashima, Tsukasa
    31ST INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, ICCE 2023, VOL I, 2023, : 362 - 371
  • [37] Automatic Decision of Piano Fingering Based on Hidden Markov Models
    Yonebayashi, Yuichiro
    Kameoka, Hirokazu
    Sagayama, Shigeki
    20TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2007, : 2915 - 2921
  • [38] A statistical property of multiagent learning based on Markov decision process
    Iwata, Kazunori
    Ikeda, Kazushi
    Sakai, Hideaki
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (04) : 829 - 842
  • [39] Quality Control for Express Items Based on Markov Decision Process
    Han, Xu
    Li, Yisong
    2016 INTERNATIONAL CONFERENCE ON LOGISTICS, INFORMATICS AND SERVICE SCIENCES (LISS' 2016), 2016,
  • [40] Markov Decision Process Based Wireless Multicast Opportunistic Routing
    Ma Dianbo
    Tan Xiaobin
    Zhou Zijian
    Yu Shanjin
    2014 33RD CHINESE CONTROL CONFERENCE (CCC), 2014, : 5509 - 5514