Building Markov Decision Process Based Models of Remote Experimental Setups for State Evaluation

Cited by: 2
Authors
Maiti, Ananda [1 ]
Kist, Alexander A. [1 ]
Maxwell, Andrew D. [1 ]
Affiliations
[1] Univ Southern Queensland, Sch Mech & Elect Engn, Toowoomba, Qld 4350, Australia
Keywords
DOI
10.1109/SSCI.2015.65
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Remote Access Laboratories (RAL) are online environments that allow users to interact with instruments through the Internet. RALs are governed by a Remote Laboratory Management System (RLMS) that provides the specific control technology and control policies for an experiment and the corresponding hardware. Normally, in a centralized RAL, these control strategies and policies are created by the experiment providers in the RLMS. In a distributed peer-to-peer RAL scenario, individual users design their own rigs and are unable to produce and enforce the control policies needed to ensure safe and stable use of the experimental rigs. The experiment controllers in such a scenario therefore have to be smart enough to learn and enforce those policies. This paper discusses a method to create a Markov Decision Process from a user's interactions with the experimental rig and to use it to ensure stability, as well as to support other users by evaluating the current state of the rig in their experimental session.
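The core idea in the abstract, estimating an MDP from logged user interactions and then scoring rig states, can be sketched roughly as follows. This is a minimal illustration under assumed conventions, not the paper's implementation; all names (`build_mdp`, `evaluate_states`, the toy states `idle`/`running`/`overheat`, and the reward values) are hypothetical.

```python
from collections import defaultdict

def build_mdp(interaction_log):
    """Estimate transition probabilities P(s'|s,a) by counting logged
    (state, action, next_state) tuples and normalizing per (s, a) pair."""
    counts = defaultdict(lambda: defaultdict(int))
    for s, a, s_next in interaction_log:
        counts[(s, a)][s_next] += 1
    transitions = {}
    for (s, a), nexts in counts.items():
        total = sum(nexts.values())
        transitions[(s, a)] = {s2: n / total for s2, n in nexts.items()}
    return transitions

def evaluate_states(transitions, rewards, gamma=0.9, iters=100):
    """Value iteration: V(s) = max_a sum_s' P(s'|s,a) [R(s') + gamma V(s')].
    Low values flag states a smart controller should steer away from."""
    states = {s for (s, _) in transitions}
    states |= {s2 for dist in transitions.values() for s2 in dist}
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        new_V = {}
        for s in states:
            q = [sum(p * (rewards.get(s2, 0.0) + gamma * V[s2])
                     for s2, p in dist.items())
                 for (s0, _a), dist in transitions.items() if s0 == s]
            new_V[s] = max(q) if q else rewards.get(s, 0.0)
        V = new_V
    return V
```

For example, a log of a rig passing through `idle`, `running`, and an unsafe `overheat` state yields a transition model in which states that tend to lead to `overheat` receive lower values, which is the kind of state evaluation the abstract describes.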
Pages: 389-397
Page count: 9