Human-in-the-Loop Synthesis for Partially Observable Markov Decision Processes

Cited by: 0
Authors
Carr, Steven [1 ]
Jansen, Nils [2 ]
Wimmer, Ralf [3 ]
Fu, Jie [4 ]
Topcu, Ufuk [1 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] Radboud Univ Nijmegen, Nijmegen, Netherlands
[3] Albert Ludwigs Univ Freiburg, Freiburg, Germany
[4] Worcester Polytech Inst WPI, Worcester, MA USA
Funding
National Science Foundation (NSF), USA;
Keywords
DOI
Not available
CLC classification number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
We study planning problems where autonomous agents operate inside environments that are subject to uncertainties and not fully observable. Partially observable Markov decision processes (POMDPs) are a natural formal model to capture such problems. Because of the potentially huge or even infinite belief space in POMDPs, synthesis with safety guarantees is, in general, computationally intractable. We propose an approach that aims to circumvent this difficulty: in scenarios that can be partially or fully simulated in a virtual environment, we actively integrate a human user to control an agent. While the user repeatedly tries to safely guide the agent in the simulation, we collect data from the human input. Via behavior cloning, we translate the data into a strategy for the POMDP. The strategy resolves all nondeterminism and non-observability of the POMDP, resulting in a discrete-time Markov chain (MC). The efficient verification of this MC gives quantitative insights into the quality of the inferred human strategy by proving or disproving given system specifications. In case the quality of the strategy is not sufficient, we propose a refinement method using counterexamples presented to the human. Experiments show that by including humans in the POMDP verification loop we improve the state of the art by orders of magnitude in terms of scalability.
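To illustrate the strategy-to-Markov-chain step described in the abstract: fixing an observation-based strategy resolves the POMDP's action choices, and the resulting Markov chain can then be checked against a reachability specification such as "the probability of reaching an unsafe state stays below a threshold." The following minimal Python sketch assumes deterministic observations and a memoryless strategy standing in for the cloned human strategy; the functions induced_mc and reach_probability and the toy model are illustrative assumptions, not part of the paper's tool chain.

# A minimal, illustrative sketch: given a POMDP with deterministic
# observations and a memoryless observation-based strategy (for example, one
# cloned from human demonstrations), build the induced discrete-time Markov
# chain and approximate the probability of reaching an unsafe state by
# fixed-point iteration. All names and the toy model below are hypothetical.
import numpy as np


def induced_mc(P, obs, strategy):
    """Induce a Markov chain from a POMDP and an observation-based strategy.

    P[a, s, t]   : probability of moving from state s to state t under action a
    obs[s]       : deterministic observation emitted in state s
    strategy[o]  : probability distribution over actions for observation o
    """
    num_actions, num_states, _ = P.shape
    mc = np.zeros((num_states, num_states))
    for s in range(num_states):
        for a in range(num_actions):
            mc[s, :] += strategy[obs[s]][a] * P[a, s, :]
    return mc


def reach_probability(mc, bad, iters=1000):
    """Least fixed point of x = [s in bad] + [s not in bad] * (mc @ x)."""
    x = np.zeros(mc.shape[0])
    bad_mask = np.zeros(mc.shape[0], dtype=bool)
    bad_mask[list(bad)] = True
    for _ in range(iters):
        x = np.where(bad_mask, 1.0, mc @ x)
    return x


# Toy POMDP: state 0 is the start, state 1 a safe absorbing goal, state 2 an
# unsafe absorbing state; states 1 and 2 emit the same observation, so an
# observation-based strategy cannot distinguish them.
P = np.array([
    [[0.1, 0.7, 0.2],   # action 0
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]],
    [[0.3, 0.6, 0.1],   # action 1
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]],
])
obs = [0, 1, 1]
strategy = {0: [0.2, 0.8], 1: [0.5, 0.5]}  # hypothetical cloned strategy

mc = induced_mc(P, obs, strategy)
print("P(reach unsafe state from s0) =", round(reach_probability(mc, {2})[0], 3))

In the paper's setting, this verification step would be carried out by a probabilistic model checker on the induced MC, and a violated specification would trigger the counterexample-guided refinement with the human described above.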
Pages: 762-769
Number of pages: 8