Learning Temporal Plan Preferences from Examples: An Empirical Study

Cited by: 0
Authors
Seimetz, Valentin [1 ]
Eifler, Rebecca [2 ]
Hoffmann, Joerg [2 ]
Affiliations
[1] German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
[2] Saarland University, Saarland Informatics Campus, Saarbrücken, Germany
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Temporal plan preferences are natural and important in a variety of applications, yet users often find it difficult to formalize their preferences. Here we explore the possibility of learning preferences from example plans. Focusing on one preference at a time, we ask the user to annotate example plans as good or bad. We leverage prior work on LTL formula learning to extract a preference from these examples. We conduct an empirical study of this approach in an oversubscription planning context, using hidden target formulas to emulate user preferences. We explore four different methods for generating example plans, and evaluate performance as a function of domain and formula size. Overall, we find that reasonable-size target formulas can often be learned effectively.
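The core idea in the abstract — annotating example plans as good/bad and extracting a temporal formula consistent with the labels — can be illustrated with a toy sketch. The finite-trace semantics, the small hypothesis space of F/G/U templates, the proposition names, and the naive enumeration below are all illustrative assumptions of this sketch, not the authors' actual learner (which builds on prior LTL formula-learning work):

```python
# Hedged sketch: pick the first temporal-formula candidate that holds on
# all "good" plan traces and on no "bad" ones. A plan trace is a list of
# states, each state a set of true propositions.

def eventually(p, trace):
    # F p: p holds in some state of the finite trace
    return any(p in state for state in trace)

def globally(p, trace):
    # G p: p holds in every state of the finite trace
    return all(p in state for state in trace)

def until(p, q, trace):
    # p U q on a finite trace: q eventually holds, and p holds
    # in every state strictly before that point
    for i, state in enumerate(trace):
        if q in state:
            return all(p in s for s in trace[:i])
    return False

def learn_preference(good, bad, props):
    """Return the first candidate formula consistent with all labels."""
    candidates = (
        [(f"F {p}", lambda t, p=p: eventually(p, t)) for p in props]
        + [(f"G {p}", lambda t, p=p: globally(p, t)) for p in props]
        + [(f"{p} U {q}", lambda t, p=p, q=q: until(p, q, t))
           for p in props for q in props if p != q]
    )
    for name, holds in candidates:
        if all(holds(t) for t in good) and not any(holds(t) for t in bad):
            return name
    return None  # hypothesis space too small to separate the examples

# Toy logistics-style traces (hypothetical propositions, not from the paper).
good = [[{"load"}, {"drive"}, {"unload"}],
        [{"load"}, {"unload"}]]
bad = [[{"drive"}, {"unload"}],   # unloads without ever loading
       [{"drive"}]]
print(learn_preference(good, bad, ["drive", "load", "unload"]))  # prints "F load"
```

Real learners search a far richer formula space (e.g. via SAT encodings), but the consistency check against labeled traces is the same; note the `p=p` default-argument idiom, which pins each proposition to its lambda at definition time.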
Pages: 4160-4166 (7 pages)
Related papers (50 in total)
  • [1] Inductive learning from preclassified training examples: An empirical study
    Li, WQ
    Aiken, M
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS, 1998, 28 (02): : 288 - 295
  • [2] Learning with few examples: an empirical study on leading classifiers
    Salperwyck, Christophe
    Lemaire, Vincent
    2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 1010 - 1019
  • [3] The complexity of learning linear temporal formulas from examples
    Fijalkow, Nathanael
    Lagarde, Guillaume
    INTERNATIONAL CONFERENCE ON GRAMMATICAL INFERENCE, VOL 153, 2021, 153 : 237 - 250
  • [4] Learning Interpretable Temporal Properties from Positive Examples Only
    Roy, Rajarshi
    Gaglione, Jean-Raphael
    Baharisangari, Nasim
    Neider, Daniel
    Xu, Zhe
    Topcu, Ufuk
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 5, 2023, : 6507 - 6515
  • [5] Employee preferences for nontaxable compensation offered in a cafeteria compensation plan - An empirical study
    White, RA
    ACCOUNTING REVIEW, 1983, 58 (03): : 539 - 561
  • [6] Learning action models from plan examples using weighted MAX-SAT
    Yang, Qiang
    Wu, Kangheng
    Jiang, Yunfei
    ARTIFICIAL INTELLIGENCE, 2007, 171 (2-3) : 107 - 143
  • [7] Learning preferences on temporal constraints: A preliminary report
    Rossi, F
    Sperduti, A
    Khatib, L
    Morris, P
    Morris, R
    EIGHTH INTERNATIONAL SYMPOSIUM ON TEMPORAL REPRESENTATION AND REASONING, PROCEEDINGS, 2001, : 63 - 68
  • [8] Learners' Attention Preferences and Learning Paths on Online Learning Content: An Empirical Study Based on Eye Movement
    Mu, Su
    Cui, Meng
    Wang, Xiao Jin
    Qiao, Jin Xiu
    Tang, Dong Mei
    2018 SEVENTH INTERNATIONAL CONFERENCE OF EDUCATIONAL INNOVATION THROUGH TECHNOLOGY (EITT 2018), 2018, : 32 - 35
  • [9] A cinefluorographic study of the temporal organization of articulator gestures: Examples from Greenlandic
    Wood, SAJ
    SPEECH COMMUNICATION, 1997, 22 (2-3) : 207 - 225
  • [10] TeLEx: learning signal temporal logic from positive examples using tightness metric
    Jha, Susmit
    Tiwari, Ashish
    Seshia, Sanjit A.
    Sahai, Tuhin
    Shankar, Natarajan
    FORMAL METHODS IN SYSTEM DESIGN, 2019, 54 (03) : 364 - 387